Nov 26 13:15:06 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 26 13:15:06 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:06 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:07 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 
13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:15:08 crc 
restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 
13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:09 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:11 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 
13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:15:12 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
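The long restorecon pass above is the SELinux relabel that runs just before the kubelet startup messages beginning here and continuing below. The repeated "not reset as customized by admin" lines are informational, not failures: on RHEL-family targeted policy, container_file_t is typically on the customizable-types list, so a non-forced relabel deliberately leaves those files alone, and only genuinely mislabeled files (such as the kubenswrapper binary, moved from bin_t to kubelet_exec_t) get relabeled. A minimal way to verify this on a node is sketched below; the paths are taken from the log and restorecon's -n flag makes it a dry run:

    # Show the file's current SELinux label (what restorecon found).
    ls -Z /var/usrlocal/bin/kubenswrapper
    # Show the label the loaded policy expects for that path.
    matchpathcon /var/usrlocal/bin/kubenswrapper
    # Dry-run relabel: -n = change nothing, -v = report what would change,
    # -F = also reset "customized" types such as container_file_t.
    restorecon -nvF /var/lib/kubelet/config.json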
Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 26 13:15:13 crc kubenswrapper[4747]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.530962 4747 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537762 4747 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537796 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537803 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537810 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537816 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537823 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537829 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537836 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537842 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537848 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537854 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537860 4747 feature_gate.go:330] unrecognized feature gate: Example
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537865 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537870 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537875 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537880 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537886 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537893 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
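The six "Flag ... has been deprecated" entries above are the kubelet pointing at options that now belong in the file passed via --config. On OpenShift/CRC that file is generated by the machine-config stack, so the sketch below is illustrative only, not something to hand-edit on the node; the field names come from the upstream KubeletConfiguration v1beta1 API, and the path and values are placeholders:

    # A minimal sketch of moving the deprecated flags into a kubelet config
    # file. Hypothetical path and values; on OpenShift these settings are
    # managed through a KubeletConfig custom resource instead.
    cat <<'EOF' > /tmp/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock  # was --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
    registerWithTaints:  # was --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:  # was --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:  # eviction settings replace --minimum-container-ttl-duration
      memory.available: 100Mi
    EOF

The one exception is --pod-infra-container-image: as the server.go message above notes, the sandbox image is now owned by the container runtime, so it is configured on the CRI side (for CRI-O, the pause_image setting) rather than in KubeletConfiguration.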
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537900 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537907 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537913 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537919 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537925 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537930 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537935 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537941 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537946 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537951 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537956 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537961 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537966 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537971 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537977 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537992 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.537997 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538003 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538008 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538013 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538018 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538025 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538032 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538038 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538047 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538057 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538080 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538085 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538091 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538096 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538103 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538108 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538113 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538118 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538125 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538131 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538137 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538143 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538148 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538154 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538159 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538165 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538171 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538178 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538184 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538189 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538194 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538199 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538205 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538210 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538215 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538221 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.538228 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539489 4747 flags.go:64] FLAG: --address="0.0.0.0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539510 4747 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539523 4747 flags.go:64] FLAG: --anonymous-auth="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539532 4747 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539541 4747 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539548 4747 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539557 4747 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539565 4747 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539572 4747 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539579 4747 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539586 4747 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539593 4747 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539600 4747 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539607 4747 flags.go:64] FLAG: --cgroup-root="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539613 4747 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539619 4747 flags.go:64] FLAG: --client-ca-file="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539625 4747 flags.go:64] FLAG: --cloud-config="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539631 4747 flags.go:64] FLAG: --cloud-provider="" Nov 26 
13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539637 4747 flags.go:64] FLAG: --cluster-dns="[]" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539645 4747 flags.go:64] FLAG: --cluster-domain="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539651 4747 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539657 4747 flags.go:64] FLAG: --config-dir="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539664 4747 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539670 4747 flags.go:64] FLAG: --container-log-max-files="5" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539679 4747 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539686 4747 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539695 4747 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539703 4747 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539713 4747 flags.go:64] FLAG: --contention-profiling="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539721 4747 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539730 4747 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539737 4747 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539745 4747 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539754 4747 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539761 4747 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539770 4747 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539778 4747 flags.go:64] FLAG: --enable-load-reader="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539787 4747 flags.go:64] FLAG: --enable-server="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539795 4747 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539806 4747 flags.go:64] FLAG: --event-burst="100" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539815 4747 flags.go:64] FLAG: --event-qps="50" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539822 4747 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539830 4747 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539837 4747 flags.go:64] FLAG: --eviction-hard="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539855 4747 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539861 4747 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539868 4747 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539875 4747 flags.go:64] FLAG: 
--eviction-soft="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539882 4747 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539887 4747 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539894 4747 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539900 4747 flags.go:64] FLAG: --experimental-mounter-path="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539906 4747 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539912 4747 flags.go:64] FLAG: --fail-swap-on="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539918 4747 flags.go:64] FLAG: --feature-gates="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539926 4747 flags.go:64] FLAG: --file-check-frequency="20s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539932 4747 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539939 4747 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539946 4747 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539953 4747 flags.go:64] FLAG: --healthz-port="10248" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539959 4747 flags.go:64] FLAG: --help="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539966 4747 flags.go:64] FLAG: --hostname-override="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539972 4747 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539979 4747 flags.go:64] FLAG: --http-check-frequency="20s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539986 4747 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539992 4747 flags.go:64] FLAG: --image-credential-provider-config="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539998 4747 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540004 4747 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540010 4747 flags.go:64] FLAG: --image-service-endpoint="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540016 4747 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540022 4747 flags.go:64] FLAG: --kube-api-burst="100" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540029 4747 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540036 4747 flags.go:64] FLAG: --kube-api-qps="50" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540042 4747 flags.go:64] FLAG: --kube-reserved="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540048 4747 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540077 4747 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540085 4747 flags.go:64] FLAG: --kubelet-cgroups="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540091 4747 flags.go:64] FLAG: 
--local-storage-capacity-isolation="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540097 4747 flags.go:64] FLAG: --lock-file="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540104 4747 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540110 4747 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540116 4747 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540126 4747 flags.go:64] FLAG: --log-json-split-stream="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540133 4747 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540139 4747 flags.go:64] FLAG: --log-text-split-stream="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540146 4747 flags.go:64] FLAG: --logging-format="text" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540152 4747 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540158 4747 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540164 4747 flags.go:64] FLAG: --manifest-url="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540170 4747 flags.go:64] FLAG: --manifest-url-header="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540179 4747 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540185 4747 flags.go:64] FLAG: --max-open-files="1000000" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540193 4747 flags.go:64] FLAG: --max-pods="110" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540200 4747 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540206 4747 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540213 4747 flags.go:64] FLAG: --memory-manager-policy="None" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540219 4747 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540225 4747 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540231 4747 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540238 4747 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540253 4747 flags.go:64] FLAG: --node-status-max-images="50" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540259 4747 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540265 4747 flags.go:64] FLAG: --oom-score-adj="-999" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540272 4747 flags.go:64] FLAG: --pod-cidr="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540278 4747 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540287 4747 flags.go:64] FLAG: --pod-manifest-path="" Nov 26 13:15:13 crc 
kubenswrapper[4747]: I1126 13:15:13.540293 4747 flags.go:64] FLAG: --pod-max-pids="-1" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540300 4747 flags.go:64] FLAG: --pods-per-core="0" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540308 4747 flags.go:64] FLAG: --port="10250" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540314 4747 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540321 4747 flags.go:64] FLAG: --provider-id="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540327 4747 flags.go:64] FLAG: --qos-reserved="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540334 4747 flags.go:64] FLAG: --read-only-port="10255" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540340 4747 flags.go:64] FLAG: --register-node="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540346 4747 flags.go:64] FLAG: --register-schedulable="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540353 4747 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540364 4747 flags.go:64] FLAG: --registry-burst="10" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540370 4747 flags.go:64] FLAG: --registry-qps="5" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540377 4747 flags.go:64] FLAG: --reserved-cpus="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540384 4747 flags.go:64] FLAG: --reserved-memory="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540392 4747 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540399 4747 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540405 4747 flags.go:64] FLAG: --rotate-certificates="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540411 4747 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540417 4747 flags.go:64] FLAG: --runonce="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540423 4747 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540430 4747 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540436 4747 flags.go:64] FLAG: --seccomp-default="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540442 4747 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540448 4747 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540454 4747 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540460 4747 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540467 4747 flags.go:64] FLAG: --storage-driver-password="root" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540473 4747 flags.go:64] FLAG: --storage-driver-secure="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540479 4747 flags.go:64] FLAG: --storage-driver-table="stats" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540485 4747 flags.go:64] FLAG: --storage-driver-user="root" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540491 4747 flags.go:64] FLAG: 
--streaming-connection-idle-timeout="4h0m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540498 4747 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540504 4747 flags.go:64] FLAG: --system-cgroups="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540511 4747 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540522 4747 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540529 4747 flags.go:64] FLAG: --tls-cert-file="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540536 4747 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540544 4747 flags.go:64] FLAG: --tls-min-version="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540551 4747 flags.go:64] FLAG: --tls-private-key-file="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540557 4747 flags.go:64] FLAG: --topology-manager-policy="none" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540564 4747 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540570 4747 flags.go:64] FLAG: --topology-manager-scope="container" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540577 4747 flags.go:64] FLAG: --v="2" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540584 4747 flags.go:64] FLAG: --version="false" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540593 4747 flags.go:64] FLAG: --vmodule="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540600 4747 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.540607 4747 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540763 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540772 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540780 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540787 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540794 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540801 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540807 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540814 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540820 4747 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540826 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540832 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540838 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:15:13 crc 
kubenswrapper[4747]: W1126 13:15:13.540846 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540853 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540860 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540865 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540871 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540877 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540883 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540890 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540896 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540903 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540909 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540915 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540921 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540927 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540934 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540940 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540947 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540952 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540959 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540964 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540971 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
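
The flags.go:64 block above is the kubelet echoing every CLI flag with its effective value, one line per flag, before the config file is overlaid. Pulling those out of a saved journal is a one-regex job; a sketch, assuming the exact FLAG: --name="value" layout seen here:

    import re

    # Matches lines like: ... flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
    FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="(.*?)"')

    def kubelet_flags(journal_text: str) -> dict:
        """Return {flag: value} for every flags.go:64 FLAG line in a capture."""
        return dict(FLAG_RE.findall(journal_text))

    sample = ('Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.539565 '
              '4747 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"')
    print(kubelet_flags(sample))   # {'--authorization-mode': 'AlwaysAllow'}
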
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540978 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540984 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540990 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.540995 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541001 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541007 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541013 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541019 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541025 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541031 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541037 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541042 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541048 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541057 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541077 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541083 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541088 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541094 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541101 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541106 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541112 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541118 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541123 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541130 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541136 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 
13:15:13.541142 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541148 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541153 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541159 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541164 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541168 4747 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541173 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541180 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541186 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541192 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541199 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541204 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.541209 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.542117 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.553024 4747 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.553089 4747 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553209 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553222 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553228 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553235 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
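
That I-level feature_gate.go:386 line closes one full parsing pass: the wrapper hands the kubelet the entire OpenShift gate list, the vanilla Kubernetes parser warns about every name it does not know (line 330), warns again when a GA or deprecated gate is set explicitly (lines 353 and 351), and keeps only the gates it recognizes in the resolved map. The same pass repeats below each time a component re-applies the gates. A rough sketch of that behavior, with a tiny stand-in registry rather than the real gate table:

    import logging
    logging.basicConfig(format="%(levelname).1s %(message)s", level=logging.INFO)

    # Stand-in registry: a few gates from this log and their lifecycle stage.
    KNOWN = {
        "CloudDualStackNodeIPs": "GA",
        "DisableKubeletCloudCredentialProviders": "GA",
        "ValidatingAdmissionPolicy": "GA",
        "KMSv1": "Deprecated",
        "DynamicResourceAllocation": "Alpha",
    }

    def set_gates(requested: dict) -> dict:
        resolved = {}
        for name, value in requested.items():
            stage = KNOWN.get(name)
            if stage is None:                                   # feature_gate.go:330
                logging.warning("unrecognized feature gate: %s", name)
                continue
            if stage == "GA":                                   # feature_gate.go:353
                logging.warning("Setting GA feature gate %s=%s. It will be "
                                "removed in a future release.", name, value)
            elif stage == "Deprecated":                         # feature_gate.go:351
                logging.warning("Setting deprecated feature gate %s=%s. It will "
                                "be removed in a future release.", name, value)
            resolved[name] = value
        logging.info("feature gates: %s", resolved)             # feature_gate.go:386
        return resolved

    set_gates({"GatewayAPI": True, "KMSv1": True, "CloudDualStackNodeIPs": True})
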
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553243 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553249 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553255 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553261 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553268 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553274 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553281 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553288 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553293 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553297 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553301 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553306 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553310 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553314 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553318 4747 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553322 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553326 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553331 4747 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553335 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553340 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553344 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553348 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553353 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553357 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553364 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:15:13 crc 
kubenswrapper[4747]: W1126 13:15:13.553369 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553373 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553378 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553382 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553387 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553392 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553396 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553400 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553405 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553409 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553415 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553421 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553428 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553434 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553439 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553443 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553448 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553453 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553458 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553462 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553466 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553470 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553475 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553479 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553484 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553488 4747 feature_gate.go:330] unrecognized feature 
gate: AdditionalRoutingCapabilities Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553494 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553500 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553505 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553510 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553515 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553521 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553526 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553530 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553536 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553543 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553548 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553553 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553558 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553562 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553567 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553572 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.553581 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553801 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553813 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553819 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553826 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553839 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553844 4747 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553849 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553855 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553859 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553864 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553870 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553875 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553880 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553885 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553890 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553894 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553899 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553904 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553909 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553915 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553919 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553924 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553930 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553935 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553940 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553945 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553949 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553954 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553961 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553966 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553971 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:15:13 crc 
kubenswrapper[4747]: W1126 13:15:13.553976 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553981 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553985 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553990 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.553996 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554001 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554007 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554027 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554032 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554038 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554043 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554048 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554057 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554078 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554083 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554089 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554094 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554102 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554109 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554116 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554124 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554131 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554137 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554142 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554149 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554155 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554160 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554165 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554171 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554175 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554180 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554185 4747 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554190 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554197 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554202 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554207 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554213 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554220 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
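
The resolved state is printed in Go map syntax three times during this boot (13:15:13.542117, .553581, and .554240 just below), which is awkward to diff by eye. A short sketch that parses one of those "feature gates: {map[...]}" lines into a Python dict:

    import re

    def parse_gate_map(line: str) -> dict:
        """Parse a 'feature gates: {map[Name:bool ...]}' log line into a dict."""
        body = re.search(r"\{map\[(.*?)\]\}", line)
        if not body:
            return {}
        return {name: value == "true"
                for name, value in (pair.split(":")
                                    for pair in body.group(1).split())}

    line = ("I1126 13:15:13.554240 4747 feature_gate.go:386] feature gates: "
            "{map[KMSv1:true NodeSwap:false]}")
    print(parse_gate_map(line))   # {'KMSv1': True, 'NodeSwap': False}
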
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554226 4747 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.554231 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.554240 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.555524 4747 server.go:940] "Client rotation is on, will bootstrap in background" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.560023 4747 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.561163 4747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.562762 4747 server.go:997] "Starting client certificate rotation" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.562800 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.563929 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-07 18:01:56.055381623 +0000 UTC Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.564044 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.594447 4747 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.597152 4747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.598449 4747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.612116 4747 log.go:25] "Validated CRI v1 runtime API" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.654149 4747 log.go:25] "Validated CRI v1 image API" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.656698 4747 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.661672 4747 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-26-13-10-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.661749 4747 fs.go:134] Filesystem partitions: 
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.661749 4747 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.691467 4747 manager.go:217] Machine: {Timestamp:2025-11-26 13:15:13.688190004 +0000 UTC m=+0.674501079 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:06628e42-f6c2-406a-9cb1-13512d1e2a59 BootID:43405111-f666-4269-b245-6c0668a7ae21 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c7:32:89 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c7:32:89 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5d:f1:d6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c4:a5:02 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:2d:2d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cb:4d:8b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:34:ec:1d:fb:50 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1e:32:b8:48:a9:fb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.691940 4747 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
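The manager.go:217 "Machine" entry above reports the raw figures cAdvisor discovered: 12 single-core sockets, MemoryCapacity:33654120448 bytes, and one 16 MiB unified L3 per socket. A back-of-the-envelope Go sketch converting those logged values to human-readable units; the constants are copied from the entry, purely illustrative:

// capacity.go - converts figures quoted from the Machine entry above.
package main

import "fmt"

func main() {
	const (
		memBytes   = 33654120448 // MemoryCapacity from the log
		numSockets = 12          // NumSockets from the log
		l3PerSock  = 16777216    // one 16 MiB unified L3 per socket
	)
	fmt.Printf("memory: %.1f GiB\n", float64(memBytes)/(1<<30))      // ~31.3 GiB
	fmt.Printf("total L3: %d MiB\n", numSockets*l3PerSock/(1<<20))   // 192 MiB
}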
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.692131 4747 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.694396 4747 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.694700 4747 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.694755 4747 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.695131 4747 topology_manager.go:138] "Creating topology manager with none policy"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.695152 4747 container_manager_linux.go:303] "Creating device plugin manager"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.695637 4747 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.695690 4747 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.696026 4747 state_mem.go:36] "Initialized new in-memory state store"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.696187 4747 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.699393 4747 kubelet.go:418] "Attempting to sync node with API server"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.699428 4747 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
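The container_manager_linux.go:272 entry above embeds the effective node config as JSON, including the hard eviction thresholds (e.g. evict when memory.available falls below 100Mi or nodefs.available below 10%). A small Go sketch that decodes just that fragment; the threshold struct mirrors the logged JSON shape only and is an assumption, not the kubelet's real type:

// thresholds.go - decodes the HardEvictionThresholds fragment quoted from
// the nodeConfig JSON above; illustrative struct shape, not kubelet's own.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`   // absolute quantity, e.g. "100Mi"
		Percentage float64 `json:"Percentage"` // fraction of capacity, e.g. 0.1
	} `json:"Value"`
}

func main() {
	// Two thresholds copied from the log entry above.
	raw := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		log.Fatal(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}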
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.699466 4747 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.699487 4747 kubelet.go:324] "Adding apiserver pod source"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.699506 4747 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.703413 4747 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.704814 4747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.706793 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.706812 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.706912 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.706947 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.707226 4747 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709014 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709093 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709109 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709124 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709147 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709161 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709176 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709198 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709241 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709270 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709291 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.709305 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.710437 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.711364 4747 server.go:1280] "Started kubelet"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.711734 4747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 26 13:15:13 crc systemd[1]: Started Kubernetes Kubelet.
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.723268 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.723366 4747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.724516 4747 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.727626 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.727810 4747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.727862 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:40:43.834274282 +0000 UTC
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.727931 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1274h25m30.106348128s for next certificate rotation
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.728194 4747 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.726739 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b90d2b5ae0fb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:15:13.711296436 +0000 UTC m=+0.697607491,LastTimestamp:2025-11-26 13:15:13.711296436 +0000 UTC m=+0.697607491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.728221 4747 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.728236 4747 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.728483 4747 server.go:460] "Adding debug handlers to kubelet server"
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.728747 4747 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.728878 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.729110 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730028 4747 factory.go:55] Registering systemd factory
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730107 4747 factory.go:221] Registration of the systemd container factory successfully
Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.730152 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730476 4747 factory.go:153] Registering CRI-O factory
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730523 4747 factory.go:221] Registration of the crio container factory successfully
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730641 4747 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730682 4747 factory.go:103] Registering Raw factory
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.730714 4747 manager.go:1196] Started watching for new ooms in manager
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.731883 4747 manager.go:319] Starting recovery of all containers
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744631 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744726 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
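The long run of reconstruct.go:130 entries that begins here is the volume manager rebuilding its actual state of the world from what is already mounted on disk, before the API server is reachable; each entry names a pod UID and a volume plugin path. A small Go sketch that tallies such entries by plugin type; the regexp is written against the quoted format and is an assumption, not an official parser:

// tally.go - counts reconstruct.go:130 entries by volume plugin type.
// Feed it journal text on stdin, e.g.: journalctl -u kubelet | go run tally.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Captures the plugin segment of volumeName="kubernetes.io/<plugin>/...".
	re := regexp.MustCompile(`volumeName="kubernetes\.io/([^/"]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-12s %d\n", plugin, n)
	}
}

On this boot the tally would be dominated by configmap, secret, and projected volumes, with a single csi volume (the hostpath-provisioner PVC near the end of the run).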
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744765 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744787 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744807 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744833 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744852 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744874 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744915 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744937 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744956 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.744976 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745001 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745032 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745143 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745175 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745195 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745215 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745236 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745264 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745297 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745320 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745351 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745373 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745401 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745430 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745454 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745476 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745496 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745515 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745538 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745560 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745581 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745601 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745620 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745641 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745669 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745692 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745777 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745799 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745818 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745838 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745866 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745897 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745914 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745939 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745961 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.745981 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746004 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746031 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746090 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746111 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746137 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746158 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746181 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746205 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746227 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746245 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746264 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746281 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746302 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746329 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746349 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746373 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746390 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746412 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746435 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746462 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746503 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746524 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746543 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746563 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746590 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746608 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746627 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746647 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746667 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746687 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746707 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746728 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746747 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746766 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746787 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746815 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746843 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746862 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746887 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746906 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746929 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746947 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746967 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.746999 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747037 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747098 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747119 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747143 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747170 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747196 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747223 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747250 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747271 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747300 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747322 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747359 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747387 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747407 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747437 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747466 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747499 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747520 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747540 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747560 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747582 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747603 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747624 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747644 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747663 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747681 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747699 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747722 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747741 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747760 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747780 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747805 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747823 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747843 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747861 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747897 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747917 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.747935 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750424 4747 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750480 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750504 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750522 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750541 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750561 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750579 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750599 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750619 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750636 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750656 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750674 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750691 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750712 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750730 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750749 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750768 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750787 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750810 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750828 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750845 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750864 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750882 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750901 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750932 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750950 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.750969 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751004 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751023 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751049 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751097 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751115 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751134 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751151 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751170 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751190 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751208 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751224 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751242 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751260 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751282 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751299 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751317 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751335 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751352 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751372 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751391 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751408 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751427 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751445 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751463 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751480 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751499 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751517 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751535 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751553 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751570 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751588 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751609 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751628 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.751647 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752288 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752327 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752376 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752401 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752438 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752462 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752488 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752522 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752761 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752805 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752835 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752877 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752901 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752942 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.752971 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.753000 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.753039 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.753085 4747 reconstruct.go:97] "Volume reconstruction finished" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.753101 4747 reconciler.go:26] "Reconciler: start to sync state" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.769781 4747 manager.go:324] Recovery completed Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.782315 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.785277 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.785366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.785408 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.786978 4747 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.787004 4747 
cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.787027 4747 state_mem.go:36] "Initialized new in-memory state store" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.794816 4747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.794901 4747 policy_none.go:49] "None policy: Start" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.796035 4747 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.796101 4747 state_mem.go:35] "Initializing new in-memory state store" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.796978 4747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.797019 4747 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.797046 4747 kubelet.go:2335] "Starting kubelet main sync loop" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.797104 4747 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 26 13:15:13 crc kubenswrapper[4747]: W1126 13:15:13.797641 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.797698 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.828980 4747 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.862610 4747 manager.go:334] "Starting Device Plugin manager" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.862704 4747 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.862720 4747 server.go:79] "Starting device plugin registration server" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.863231 4747 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.863248 4747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.864764 4747 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.864949 4747 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.864965 4747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.873416 4747 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get 
node info: node \"crc\" not found" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.897605 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.897752 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.901327 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.901429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.901451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.901816 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904137 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904231 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904750 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.904974 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.905308 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.905435 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906493 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906741 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.906921 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.907035 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.907114 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.907310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.907359 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.907384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908409 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908741 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908799 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908807 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908859 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.908819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.909746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.909801 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.909819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.910101 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.910150 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.910720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.910765 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.910790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.912499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.912531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.912547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.931100 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956536 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956589 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956623 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956656 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956786 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.956950 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957028 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957079 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957121 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957150 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957182 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957211 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957280 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.957333 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.963420 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.966762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.966808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.966828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:13 crc kubenswrapper[4747]: I1126 13:15:13.966873 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:13 crc kubenswrapper[4747]: E1126 13:15:13.967432 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058466 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058536 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058466 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058536 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058553 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058591 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058622 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058641 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058636 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058677 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058694 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058690 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058723 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058728 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058658 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058680 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058658 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058787 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058800 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058841 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058787 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058821 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058912 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058950 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058850 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058983 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058958 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.059015 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.058989 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.167641 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.169517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.169579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.169604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.169646 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.170255 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc"
kubenswrapper[4747]: E1126 13:15:14.170255 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.229513 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.236695 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.255538 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.263117 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.267958 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.285559 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b8868018fb4628f8dad010b1edcc740cd8efa6bd7e1a448658cd0ecacfdf90c6 WatchSource:0}: Error finding container b8868018fb4628f8dad010b1edcc740cd8efa6bd7e1a448658cd0ecacfdf90c6: Status 404 returned error can't find the container with id b8868018fb4628f8dad010b1edcc740cd8efa6bd7e1a448658cd0ecacfdf90c6 Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.289209 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e2f73bf4b26f80e05e044e137ca46a6a44301867072ffbe09441035050771ebe WatchSource:0}: Error finding container e2f73bf4b26f80e05e044e137ca46a6a44301867072ffbe09441035050771ebe: Status 404 returned error can't find the container with id e2f73bf4b26f80e05e044e137ca46a6a44301867072ffbe09441035050771ebe Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.293533 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2a312b833ac6cb859eb80a76fb3df539fc65a77d6b27ee07ee43ef746438db35 WatchSource:0}: Error finding container 2a312b833ac6cb859eb80a76fb3df539fc65a77d6b27ee07ee43ef746438db35: Status 404 returned error can't find the container with id 2a312b833ac6cb859eb80a76fb3df539fc65a77d6b27ee07ee43ef746438db35 Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.301274 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ea9ddecd8e17019b23329b200680bb3aa6d924b6e7bbde7b2597c1334d0d9e0b WatchSource:0}: Error finding container ea9ddecd8e17019b23329b200680bb3aa6d924b6e7bbde7b2597c1334d0d9e0b: Status 404 returned error can't find the container with id ea9ddecd8e17019b23329b200680bb3aa6d924b6e7bbde7b2597c1334d0d9e0b Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.303177 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5c037883e697a74ac9a327fc9456a558d84674940d3ab700c436b15a14cb987b WatchSource:0}: Error finding container 5c037883e697a74ac9a327fc9456a558d84674940d3ab700c436b15a14cb987b: Status 404 returned error can't find the container with id 5c037883e697a74ac9a327fc9456a558d84674940d3ab700c436b15a14cb987b Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.332491 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.570341 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.572311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.572348 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.572361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.572386 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.572733 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.596567 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.596711 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.603598 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.603682 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.724029 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: 
connection refused Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.802724 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8868018fb4628f8dad010b1edcc740cd8efa6bd7e1a448658cd0ecacfdf90c6"} Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.803816 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5c037883e697a74ac9a327fc9456a558d84674940d3ab700c436b15a14cb987b"} Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.805871 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea9ddecd8e17019b23329b200680bb3aa6d924b6e7bbde7b2597c1334d0d9e0b"} Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.806848 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2a312b833ac6cb859eb80a76fb3df539fc65a77d6b27ee07ee43ef746438db35"} Nov 26 13:15:14 crc kubenswrapper[4747]: I1126 13:15:14.808379 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e2f73bf4b26f80e05e044e137ca46a6a44301867072ffbe09441035050771ebe"} Nov 26 13:15:14 crc kubenswrapper[4747]: W1126 13:15:14.995582 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:14 crc kubenswrapper[4747]: E1126 13:15:14.995668 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:15 crc kubenswrapper[4747]: E1126 13:15:15.134316 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s" Nov 26 13:15:15 crc kubenswrapper[4747]: W1126 13:15:15.196591 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:15 crc kubenswrapper[4747]: E1126 13:15:15.196821 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.373472 4747 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.375933 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.375991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.376018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.376102 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:15 crc kubenswrapper[4747]: E1126 13:15:15.376884 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.724217 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.771714 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 26 13:15:15 crc kubenswrapper[4747]: E1126 13:15:15.773407 4747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.816498 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9" exitCode=0 Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.816593 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9"} Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.816679 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.818529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.818594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.818615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.819249 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08" exitCode=0 Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.819323 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08"} Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.819443 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.821226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.821281 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.821312 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.823014 4747 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509" exitCode=0 Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.823127 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509"} Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.823255 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.824340 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825801 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825874 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.825892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.827019 4747 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b" exitCode=0 Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.827110 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b"} Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.827160 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 
13:15:15.833689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.833761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.833779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:15 crc kubenswrapper[4747]: I1126 13:15:15.834512 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.724428 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:16 crc kubenswrapper[4747]: E1126 13:15:16.735382 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="3.2s" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.842106 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114" exitCode=0 Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.842175 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.842295 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.843372 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.843402 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.843412 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.845240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.845297 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.845317 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.847114 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.847233 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.848403 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.848455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.848477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.855249 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.855283 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.855293 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.855388 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.864158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.864194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.864203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.866328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.866355 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.866365 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf"} Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.866422 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.867288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.867315 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.867326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:16 crc kubenswrapper[4747]: W1126 13:15:16.918928 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:16 crc kubenswrapper[4747]: E1126 13:15:16.919023 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.977731 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.979111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.979165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.979178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:16 crc kubenswrapper[4747]: I1126 13:15:16.979210 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:16 crc kubenswrapper[4747]: E1126 13:15:16.979766 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Nov 26 13:15:17 crc kubenswrapper[4747]: W1126 13:15:17.047685 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Nov 26 13:15:17 crc kubenswrapper[4747]: E1126 13:15:17.047798 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:15:17 crc kubenswrapper[4747]: 
I1126 13:15:17.872639 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd" exitCode=0 Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.872759 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd"} Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.872876 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.874240 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.874301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.874330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878236 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58"} Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878282 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878299 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e"} Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878354 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878418 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878417 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.878488 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.879779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.879841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.879865 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.880878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.880922 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:17 crc 
kubenswrapper[4747]: I1126 13:15:17.880942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881004 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881085 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:17 crc kubenswrapper[4747]: I1126 13:15:17.881969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.887692 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5"} Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.887747 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d"} Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.887769 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c"} Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.887784 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.887906 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.888008 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.889875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.889932 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.889950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.890133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.890194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:18 crc kubenswrapper[4747]: I1126 13:15:18.890251 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.803006 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.895494 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79"} Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.895531 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae"} Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.895605 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.896120 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.896737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.896760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.896768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.897220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.897478 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.897660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:19 crc kubenswrapper[4747]: I1126 13:15:19.935731 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.180516 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.182029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.182137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.182162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.182197 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.218399 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.681359 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 
13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.681537 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.682642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.682678 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.682690 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.816907 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.898934 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.898936 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900503 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:20 crc kubenswrapper[4747]: I1126 13:15:20.900627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.901818 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.901840 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.903099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.903132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.903147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.908381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.909099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:21 crc kubenswrapper[4747]: I1126 13:15:21.909151 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.295004 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.295281 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.296542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.296598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.296618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.302277 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.905310 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.905476 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.906897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.906957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.906975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:22 crc kubenswrapper[4747]: I1126 13:15:22.993936 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:23 crc kubenswrapper[4747]: E1126 13:15:23.873585 4747 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 26 13:15:23 crc kubenswrapper[4747]: I1126 13:15:23.907516 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:23 crc kubenswrapper[4747]: I1126 13:15:23.908810 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:23 crc kubenswrapper[4747]: I1126 13:15:23.908872 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:23 crc kubenswrapper[4747]: I1126 13:15:23.908892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.165192 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.165439 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.167123 4747 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.167225 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.167250 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.909843 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.911113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.911149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:24 crc kubenswrapper[4747]: I1126 13:15:24.911161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:25 crc kubenswrapper[4747]: I1126 13:15:25.994438 4747 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 26 13:15:25 crc kubenswrapper[4747]: I1126 13:15:25.994521 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 26 13:15:26 crc kubenswrapper[4747]: I1126 13:15:26.575498 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:26 crc kubenswrapper[4747]: I1126 13:15:26.575634 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:26 crc kubenswrapper[4747]: I1126 13:15:26.576859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:26 crc kubenswrapper[4747]: I1126 13:15:26.576905 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:26 crc kubenswrapper[4747]: I1126 13:15:26.576923 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:27 crc kubenswrapper[4747]: W1126 13:15:27.505379 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:15:27 crc kubenswrapper[4747]: I1126 13:15:27.505577 4747 trace.go:236] Trace[1969255996]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:15:17.503) (total time: 10001ms): Nov 26 13:15:27 crc kubenswrapper[4747]: Trace[1969255996]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": 
net/http: TLS handshake timeout 10001ms (13:15:27.505) Nov 26 13:15:27 crc kubenswrapper[4747]: Trace[1969255996]: [10.001785943s] [10.001785943s] END Nov 26 13:15:27 crc kubenswrapper[4747]: E1126 13:15:27.505619 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 13:15:27 crc kubenswrapper[4747]: W1126 13:15:27.542428 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:15:27 crc kubenswrapper[4747]: I1126 13:15:27.542666 4747 trace.go:236] Trace[791447766]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:15:17.541) (total time: 10000ms): Nov 26 13:15:27 crc kubenswrapper[4747]: Trace[791447766]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:15:27.542) Nov 26 13:15:27 crc kubenswrapper[4747]: Trace[791447766]: [10.000944172s] [10.000944172s] END Nov 26 13:15:27 crc kubenswrapper[4747]: E1126 13:15:27.542833 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 13:15:27 crc kubenswrapper[4747]: I1126 13:15:27.724681 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:15:28 crc kubenswrapper[4747]: I1126 13:15:28.375777 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 26 13:15:28 crc kubenswrapper[4747]: I1126 13:15:28.375840 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 26 13:15:28 crc kubenswrapper[4747]: I1126 13:15:28.394842 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 26 13:15:28 crc kubenswrapper[4747]: I1126 13:15:28.394909 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP 
probe failed with statuscode: 403" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.823298 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.823514 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.824780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.824827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.824846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.829380 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.927010 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.928620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.928701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:30 crc kubenswrapper[4747]: I1126 13:15:30.928728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.343670 4747 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.710027 4747 apiserver.go:52] "Watching apiserver" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.717178 4747 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.717653 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718140 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718313 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718633 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718682 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718747 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.718753 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:31 crc kubenswrapper[4747]: E1126 13:15:31.719109 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:31 crc kubenswrapper[4747]: E1126 13:15:31.719188 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:31 crc kubenswrapper[4747]: E1126 13:15:31.719385 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.720968 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.721009 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.721007 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.723655 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.723990 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.723995 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.724254 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.724358 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.724802 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.729531 4747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.758368 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.772801 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.788732 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.805293 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.819714 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.835806 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:31 crc kubenswrapper[4747]: I1126 13:15:31.847375 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:32 crc kubenswrapper[4747]: I1126 13:15:32.797325 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:32 crc kubenswrapper[4747]: E1126 13:15:32.797519 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:32 crc kubenswrapper[4747]: I1126 13:15:32.927032 4747 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.371384 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.376015 4747 trace.go:236] Trace[1855919205]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:15:21.477) (total time: 11898ms): Nov 26 13:15:33 crc kubenswrapper[4747]: Trace[1855919205]: ---"Objects listed" error: 11898ms (13:15:33.375) Nov 26 13:15:33 crc kubenswrapper[4747]: Trace[1855919205]: [11.89824948s] [11.89824948s] END Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.376079 4747 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.377892 4747 trace.go:236] Trace[161232043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:15:22.222) (total time: 11155ms): Nov 26 13:15:33 crc kubenswrapper[4747]: Trace[161232043]: ---"Objects listed" error: 11155ms (13:15:33.377) Nov 26 13:15:33 crc kubenswrapper[4747]: Trace[161232043]: [11.155523348s] [11.155523348s] END Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.377932 4747 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.378626 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.380118 4747 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.384438 4747 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.416460 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51520->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.416507 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51524->192.168.126.11:17697: read: 
connection reset by peer" start-of-body= Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.416538 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51520->192.168.126.11:17697: read: connection reset by peer" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.416572 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51524->192.168.126.11:17697: read: connection reset by peer" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.417095 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.417180 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.417334 4747 csr.go:261] certificate signing request csr-m6lrf is approved, waiting to be issued Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.435795 4747 csr.go:257] certificate signing request csr-m6lrf is issued Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480793 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480813 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480830 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480847 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480863 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480878 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.480893 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481516 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481541 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481688 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481713 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481730 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481746 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481763 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481778 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481792 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481810 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481825 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481839 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481855 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481996 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482015 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482031 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482104 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482020 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482114 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482204 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482365 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482367 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482407 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.481869 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482614 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482631 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482647 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482661 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482675 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482688 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482703 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482719 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482734 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482747 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482762 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.482827 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483134 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483184 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483232 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483246 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483461 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.483481 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484170 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484318 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484359 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484471 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484508 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484531 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484560 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484585 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484610 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484631 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484652 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484671 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484689 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484738 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484741 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484755 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484856 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484884 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.484929 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485128 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485180 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485209 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485230 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485250 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485269 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485288 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485313 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485337 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485356 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485377 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485396 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485415 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485435 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485457 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485476 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485495 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485514 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485553 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485572 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " 
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485593 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485612 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485635 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485654 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485672 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485692 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485712 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485755 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485775 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 
26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485794 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485813 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485835 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485178 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485186 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485255 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485396 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485566 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485595 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485815 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485850 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485717 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486035 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486145 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486184 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486200 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486263 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486360 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.485855 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.486988 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487133 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487176 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487209 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487253 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487244 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487277 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487297 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487319 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487335 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487354 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487370 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487385 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487498 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487509 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487579 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487516 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487641 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487678 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487701 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487722 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487743 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487764 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487784 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487787 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487804 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487827 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487847 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487868 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487891 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487911 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487930 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487951 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487972 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487994 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488015 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488035 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488078 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488097 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488152 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488173 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488193 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488214 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488234 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488256 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488277 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488297 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488319 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488340 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488364 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488383 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488409 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488430 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488450 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488469 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488493 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488512 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488554 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488573 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488594 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488614 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488634 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488655 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488678 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488701 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488721 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488742 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488763 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488796 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488818 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488839 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: 
I1126 13:15:33.488859 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488879 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488900 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488919 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488939 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488959 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488979 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488999 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489020 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489041 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489086 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489110 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489132 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489153 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489175 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489198 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489220 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489241 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489261 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489282 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489303 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489324 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489349 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489372 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489393 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489416 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489439 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489464 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489488 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489510 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489591 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489613 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489635 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489657 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489680 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489701 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489723 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489745 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489767 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489788 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489808 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489829 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489851 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489871 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489896 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489922 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489944 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489965 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489987 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490008 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490030 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490069 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490114 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490140 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490162 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490188 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490217 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490240 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490264 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490286 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490309 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490332 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490356 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490379 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490405 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490426 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490478 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490493 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490505 4747 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490518 4747 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490530 4747 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490543 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490555 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490567 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490580 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490593 4747 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490605 4747 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490778 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490793 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490807 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490820 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490846 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490858 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490871 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490883 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490896 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490908 4747 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490920 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490933 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490946 4747 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490959 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490970 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490983 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490996 4747 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491008 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491020 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491033 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491044 4747 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491072 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491083 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491095 4747 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491108 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491120 4747 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491132 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491145 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491158 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491170 4747 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491183 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491194 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491207 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491220 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491234 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491247 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491258 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491270 4747 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491282 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491301 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487964 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.487991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488262 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488280 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488284 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488306 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488591 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488818 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.488855 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489014 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489145 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489163 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489369 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489391 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489401 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489541 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489468 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489670 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489692 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489862 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.489827 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490278 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490244 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490524 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490525 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.490984 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491468 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.491825 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492276 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492293 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492280 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492467 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492654 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492824 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493106 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493650 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493157 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493368 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493851 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.493869 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494043 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494313 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494385 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494560 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494647 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494715 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494765 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495013 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495116 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495198 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495280 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495342 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495466 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495493 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495538 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495620 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495804 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.495834 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496133 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496298 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496414 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496568 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496749 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.496983 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497129 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497386 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497440 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497471 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497516 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497798 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497865 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494923 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.497977 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498152 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498175 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.492829 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.498283 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:15:33.998264935 +0000 UTC m=+20.984575940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498571 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498671 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498694 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498786 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.498954 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499096 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499383 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499427 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499662 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499725 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499734 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.499911 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.494822 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500127 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500206 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500652 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500843 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500893 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.500892 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.501125 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.501536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.502113 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.502141 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.502580 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.502634 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.503145 4747 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled.
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.503760 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.503824 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:34.003806773 +0000 UTC m=+20.990117798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.503889 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.503923 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:34.003914226 +0000 UTC m=+20.990225251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.504274 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.505356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.505769 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.507090 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod 
"31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.507229 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.507465 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.510234 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.511515 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.511659 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.514532 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.515046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.515545 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.515756 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.516425 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.516522 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.517396 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.517554 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.517578 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.517878 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.519628 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.519654 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.519912 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.519625 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.520895 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.520915 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.521309 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.521402 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.521550 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.521797 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.522720 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.522975 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.525324 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.525347 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.525366 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.525412 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:34.025397352 +0000 UTC m=+21.011708367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.526091 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.526116 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.526131 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.526189 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:34.026171681 +0000 UTC m=+21.012482796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.526325 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.528275 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.528591 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.529214 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.529581 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.529966 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.530353 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.531044 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.532443 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.532693 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.533201 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.533309 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.536171 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.536370 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.536517 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.536634 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.544420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.551813 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.553343 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.564653 4747 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 26 13:15:33 crc kubenswrapper[4747]: W1126 13:15:33.566469 4747 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:15:33 crc kubenswrapper[4747]: W1126 13:15:33.567070 4747 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:15:33 crc kubenswrapper[4747]: W1126 13:15:33.567994 4747 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.577822 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.142:45048->38.102.83.142:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187b90d2d89cbbc9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:15:14.297363401 +0000 UTC m=+1.283674466,LastTimestamp:2025-11-26 13:15:14.297363401 +0000 UTC m=+1.283674466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592387 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592451 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592498 4747 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592511 4747 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592524 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592536 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592548 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592560 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592572 4747 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592583 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592594 4747 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592605 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592611 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592612 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592699 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592717 4747 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592734 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592749 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592765 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592781 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592820 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592832 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592844 4747 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592855 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592866 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592878 4747 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592888 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592899 4747 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592910 
4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592922 4747 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592934 4747 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592944 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592955 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592965 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592976 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.592988 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593000 4747 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593013 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593025 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593038 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593049 4747 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593095 4747 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593106 4747 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593117 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593130 4747 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593140 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593151 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593161 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593172 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593182 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593193 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593205 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593216 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593227 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593238 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593255 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593265 4747 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593276 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593288 4747 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593298 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593312 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593323 4747 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593334 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593344 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593355 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593366 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593378 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593388 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593399 4747 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593409 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593420 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593431 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593442 4747 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593452 4747 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593462 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593473 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593484 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593495 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593505 4747 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593515 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593526 4747 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593537 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593548 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593558 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593569 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593580 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593596 4747 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593611 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593624 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593639 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593653 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593665 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593676 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593689 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593703 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593718 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593731 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593744 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593760 4747 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593774 4747 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593788 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593802 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593816 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593832 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593848 4747 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593862 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593878 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593893 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593906 4747 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593920 4747 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593934 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593949 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593963 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593976 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.593990 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594004 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594017 4747 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594033 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594047 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 
13:15:33.594085 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594100 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594115 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594130 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594145 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594162 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594178 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594193 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594205 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594218 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594231 4747 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594248 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594262 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc 
kubenswrapper[4747]: I1126 13:15:33.594277 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594292 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594305 4747 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594319 4747 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594332 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594346 4747 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594360 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594374 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594389 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594403 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594419 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594436 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594451 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594467 4747 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.594480 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.798114 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.798114 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.798355 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:33 crc kubenswrapper[4747]: E1126 13:15:33.798257 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.804319 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.805026 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.806558 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.807412 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.808653 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.809301 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.810039 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.811246 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.812003 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.813655 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.814285 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.815596 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.816313 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.816629 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.816979 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.818131 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.818792 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.819982 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.820545 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.821339 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.822594 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.827529 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.828374 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.828937 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 
13:15:33.830123 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.830492 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.831456 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.832025 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.832255 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.832877 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.833482 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.834249 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.834676 4747 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.834770 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.836326 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.837595 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.838022 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.838160 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.839528 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.840533 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.841074 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.842082 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.842730 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.843167 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.844214 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.845165 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.845759 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.846582 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.847100 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.848078 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.848766 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.849648 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: W1126 13:15:33.849668 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-368e75969e97c2f8d198aad2e3e8bf8448595c1ed616bf0be7c787c0e0dec1ac WatchSource:0}: Error finding container 368e75969e97c2f8d198aad2e3e8bf8448595c1ed616bf0be7c787c0e0dec1ac: Status 404 returned error can't find the container with id 368e75969e97c2f8d198aad2e3e8bf8448595c1ed616bf0be7c787c0e0dec1ac Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.850142 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.850560 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.851375 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.851497 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.852165 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.852610 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.865458 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.867974 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.890341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.906114 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.936881 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"368e75969e97c2f8d198aad2e3e8bf8448595c1ed616bf0be7c787c0e0dec1ac"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.938771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.938799 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.938808 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3494f7a0c2cb65efb3430436ff4e8f7dcf006652b753f2b0b154c39f16d99699"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.940125 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.942289 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58" exitCode=255 Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.942354 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.943078 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fd47e9cfa1380abdab32630a4ca3435c50dcebbfbff3329d6dfaca77a69c342d"} Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.951104 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.957441 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.957605 4747 scope.go:117] "RemoveContainer" containerID="a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.962910 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.982881 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:33 crc kubenswrapper[4747]: I1126 13:15:33.993310 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.004003 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.016285 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"web
hook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.037835 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.059315 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.076251 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.093113 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26
T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.098533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.098590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.098612 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.098631 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.098654 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098732 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098746 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098765 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098774 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098802 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:35.098782043 +0000 UTC m=+22.085093058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098819 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:35.098812713 +0000 UTC m=+22.085123728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098878 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098904 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:35.098897366 +0000 UTC m=+22.085208381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098933 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.098986 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.099008 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.099166 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:35.099146042 +0000 UTC m=+22.085457057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.099312 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:15:35.099301096 +0000 UTC m=+22.085612111 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.105028 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.113990 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"web
hook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.121544 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.198144 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.206910 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26
T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.212096 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.212414 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.216835 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.225938 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.232768 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.241468 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.249840 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.257417 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.261998 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.267192 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.267863 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.276583 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.291676 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b
17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.303472 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.311588 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.314662 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.328665 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.341552 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.357859 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.378290 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.393182 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apise
rver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.405932 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.423953 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.434563 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.436968 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-26 13:10:33 +0000 UTC, rotation deadline is 2026-10-13 05:56:00.175253651 +0000 UTC Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.437038 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7696h40m25.738222349s for next certificate rotation Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.447581 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.466171 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.488784 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.513107 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.797919 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:34 crc kubenswrapper[4747]: E1126 13:15:34.798075 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.947104 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63"} Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.948611 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.950310 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d"} Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.958569 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.973298 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:34 crc kubenswrapper[4747]: I1126 13:15:34.998168 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.019581 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.040365 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.057496 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.069329 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.083258 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.101545 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.104807 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.104882 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.104906 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.104930 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.104961 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:15:37.104937779 +0000 UTC m=+24.091248794 (durationBeforeRetry 2s). 
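A note on the failure repeated throughout the status-manager entries above: every patch is rejected because the pod.network-node-identity.openshift.io webhook presents a serving certificate whose validity window ended at 2025-08-24T17:21:41Z, months before the node's clock time of 2025-11-26T13:15:35Z, so the kubelet's TLS client refuses the handshake. The following is a minimal sketch in Go of the validity-window comparison that produces the "certificate has expired or is not yet valid" message; the PEM path is hypothetical and would need to point at a local copy of the webhook's serving certificate.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: a PEM copy of the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same window check a TLS client applies during certificate verification.
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

The same comparison runs inside every TLS handshake, which is why each patch attempt in the entries above fails identically until the webhook's certificate is rotated.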
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.105022 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105102 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105116 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105112 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105150 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105152 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:37.105145915 +0000 UTC m=+24.091456930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105164 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105170 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:37.105164015 +0000 UTC m=+24.091475030 (durationBeforeRetry 2s). 
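The UnmountVolume.TearDown failure above is a different startup-ordering problem: the kubelet can only tear down a CSI volume through a driver that has registered on the node over the plugin-registration socket, and at this point kubevirt.io.hostpath-provisioner has not yet re-registered, so no CSI client can be obtained and the operation is deferred for retry. A minimal sketch, assuming kubeconfig access to the cluster, of how to inspect which drivers the node currently has registered via its CSINode object:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The node name "crc" matches the node in the log above.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// spec.drivers mirrors what the kubelet has accepted through the
	// plugin-registration socket; a driver absent here cannot serve
	// volume operations, matching the "not found in the list of
	// registered CSI drivers" error above.
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver: %s (nodeID %s)\n", d.Name, d.NodeID)
	}
}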
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105215 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:37.105200976 +0000 UTC m=+24.091511991 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105112 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105248 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105262 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.105315 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:37.105296138 +0000 UTC m=+24.091607223 (durationBeforeRetry 2s). 
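The MountVolume.SetUp failures above ("object ... not registered") follow the same pattern: the kubelet serves Secret and ConfigMap volume content from per-object watch caches that it only starts once it has synced the pods referencing them, and the later "Caches populated for *v1.ConfigMap ..." entries show those caches coming up. Until then each mount attempt fails and its next try is pushed out by a growing delay, which is what the "No retries permitted until ... (durationBeforeRetry 2s)" lines record. A minimal sketch of that gating; the constants are illustrative rather than the kubelet's exact tuning:

package main

import (
	"fmt"
	"time"
)

// backoff gates an operation the way the volume-manager entries above do:
// each failure pushes the next permitted attempt further out.
type backoff struct {
	delay       time.Duration
	maxDelay    time.Duration
	nextAttempt time.Time
}

// fail records a failed attempt at time now and doubles the delay, capped.
func (b *backoff) fail(now time.Time) {
	b.nextAttempt = now.Add(b.delay)
	if b.delay *= 2; b.delay > b.maxDelay {
		b.delay = b.maxDelay
	}
}

// ready reports whether a retry is permitted at time now.
func (b *backoff) ready(now time.Time) bool { return !now.Before(b.nextAttempt) }

func main() {
	// Illustrative tuning, not the kubelet's actual constants.
	b := &backoff{delay: 500 * time.Millisecond, maxDelay: 2 * time.Minute}
	now := time.Now()
	for attempt := 1; attempt <= 4; attempt++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s\n",
			attempt, b.nextAttempt.UTC().Format(time.RFC3339Nano))
		now = b.nextAttempt // assume the retry fires at the deadline and fails too
	}
	fmt.Println("retry permitted now?", b.ready(now))
}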
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.115537 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.134882 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.161505 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.241557 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.270248 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.270318 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.283516 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.297850 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304020 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hjc55"] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304396 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304486 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lb7jc"] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304748 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p296l"] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304883 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.304956 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307150 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307231 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307298 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307345 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 26 13:15:35 crc kubenswrapper[4747]: W1126 13:15:35.307406 4747 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307430 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.307436 4747 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307504 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307519 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307619 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.307926 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.308041 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: W1126 13:15:35.308208 4747 reflector.go:561] 
object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.308243 4747 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.321733 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: W1126 13:15:35.322756 4747 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.322787 4747 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.337310 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.352763 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.374343 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.389465 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.405098 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407426 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cni-binary-copy\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407455 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f152815-d3e9-4250-9427-94f851c10579-hosts-file\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407478 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b021e3b3-27be-4500-8dae-e5cd31ba8405-rootfs\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407499 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-system-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407522 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-multus\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407559 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b021e3b3-27be-4500-8dae-e5cd31ba8405-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407582 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-conf-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407606 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnj4\" (UniqueName: \"kubernetes.io/projected/b021e3b3-27be-4500-8dae-e5cd31ba8405-kube-api-access-9wnj4\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407627 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb6k\" (UniqueName: \"kubernetes.io/projected/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-kube-api-access-scb6k\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407659 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-socket-dir-parent\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407716 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-netns\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-hostroot\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407826 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npngj\" (UniqueName: \"kubernetes.io/projected/5f152815-d3e9-4250-9427-94f851c10579-kube-api-access-npngj\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407849 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cnibin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407919 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b021e3b3-27be-4500-8dae-e5cd31ba8405-proxy-tls\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407941 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-k8s-cni-cncf-io\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407961 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-multus-certs\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.407983 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-bin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.408015 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.408037 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-os-release\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.408088 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-daemon-config\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.408113 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-etc-kubernetes\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.408134 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-kubelet\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.415997 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.428233 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.444325 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.462764 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.476502 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.489386 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.505518 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508678 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnj4\" (UniqueName: \"kubernetes.io/projected/b021e3b3-27be-4500-8dae-e5cd31ba8405-kube-api-access-9wnj4\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 
13:15:35.508729 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-hostroot\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508753 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb6k\" (UniqueName: \"kubernetes.io/projected/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-kube-api-access-scb6k\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508782 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-socket-dir-parent\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508802 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-netns\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508827 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npngj\" (UniqueName: \"kubernetes.io/projected/5f152815-d3e9-4250-9427-94f851c10579-kube-api-access-npngj\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508849 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b021e3b3-27be-4500-8dae-e5cd31ba8405-proxy-tls\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508868 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cnibin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508881 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-socket-dir-parent\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508896 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-k8s-cni-cncf-io\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508822 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-hostroot\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-multus-certs\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508950 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-os-release\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508972 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-bin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508996 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509026 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-daemon-config\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509043 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-etc-kubernetes\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509085 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-kubelet\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509104 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cni-binary-copy\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509120 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f152815-d3e9-4250-9427-94f851c10579-hosts-file\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 
13:15:35.509139 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b021e3b3-27be-4500-8dae-e5cd31ba8405-rootfs\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509157 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-system-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509176 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-multus\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509202 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b021e3b3-27be-4500-8dae-e5cd31ba8405-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509217 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-conf-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509218 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cnibin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509256 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-conf-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509262 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-etc-kubernetes\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509294 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-kubelet\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509332 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-netns\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.508955 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-multus-certs\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509691 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-bin\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509822 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509872 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-os-release\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509924 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f152815-d3e9-4250-9427-94f851c10579-hosts-file\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509976 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b021e3b3-27be-4500-8dae-e5cd31ba8405-rootfs\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.509944 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-system-cni-dir\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.510030 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-run-k8s-cni-cncf-io\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.510048 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-host-var-lib-cni-multus\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.510220 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-multus-daemon-config\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.510363 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-cni-binary-copy\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.510757 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b021e3b3-27be-4500-8dae-e5cd31ba8405-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.517894 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b021e3b3-27be-4500-8dae-e5cd31ba8405-proxy-tls\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.523626 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.524898 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb6k\" (UniqueName: \"kubernetes.io/projected/aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1-kube-api-access-scb6k\") pod \"multus-lb7jc\" (UID: \"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\") " pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.530583 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnj4\" (UniqueName: \"kubernetes.io/projected/b021e3b3-27be-4500-8dae-e5cd31ba8405-kube-api-access-9wnj4\") pod \"machine-config-daemon-hjc55\" (UID: \"b021e3b3-27be-4500-8dae-e5cd31ba8405\") " pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.615566 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.622403 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lb7jc" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.672678 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-75p22"] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.673393 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4wml"] Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.674046 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.674230 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677195 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677356 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677532 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677698 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677798 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677898 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.677991 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.678172 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.678801 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.701718 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.718243 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.730092 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.741345 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.770632 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.784399 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.798385 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.798494 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.798645 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:35 crc kubenswrapper[4747]: E1126 13:15:35.798731 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.799588 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811485 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-os-release\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811521 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-binary-copy\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811540 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811556 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811576 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811590 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811606 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811620 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811634 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cm5m\" (UniqueName: \"kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811648 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc 
kubenswrapper[4747]: I1126 13:15:35.811663 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811680 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811693 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811727 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cnibin\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811741 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811755 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811771 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811785 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib\") pod \"ovnkube-node-m4wml\" (UID: 
\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811819 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811834 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811851 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4z5\" (UniqueName: \"kubernetes.io/projected/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-kube-api-access-kj4z5\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811866 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-system-cni-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811880 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811893 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811909 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.811931 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.812302 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.825979 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.841365 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.855185 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.871542 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.889007 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.910943 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 
crc kubenswrapper[4747]: I1126 13:15:35.913314 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cnibin\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913375 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913388 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913402 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913418 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913484 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913515 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913543 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4z5\" (UniqueName: \"kubernetes.io/projected/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-kube-api-access-kj4z5\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913550 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913568 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-system-cni-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913594 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913616 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cnibin\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913615 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913652 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913648 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913680 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913708 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913715 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913751 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-os-release\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913791 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-binary-copy\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913806 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-os-release\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913812 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913843 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913875 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913884 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913901 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913926 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.913977 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914001 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cm5m\" (UniqueName: \"kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914030 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914079 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914121 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914142 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914180 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914183 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-system-cni-dir\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914205 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914223 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914230 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914259 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914296 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914326 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914614 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914688 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-cni-binary-copy\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.914710 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.918951 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.930264 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4z5\" (UniqueName: \"kubernetes.io/projected/405692d3-ec7c-4ebe-8d8f-d89f0de8a62a-kube-api-access-kj4z5\") pod \"multus-additional-cni-plugins-75p22\" (UID: \"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\") " pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.930263 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.935536 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cm5m\" (UniqueName: \"kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m\") pod \"ovnkube-node-m4wml\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.953710 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7"} Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.953784 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023"} Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.953847 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"89bdd3574ad2d906d0dad31ee2ebfb94907395304161ad954a7bfb3e018f9814"} Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.955889 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerStarted","Data":"eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d"} Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.955933 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerStarted","Data":"5d9f9720ca7e23c41c3c20ba613a9c6c969031eb47962ce035936b79717b7e94"} Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.968695 4747 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:35Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:35 crc kubenswrapper[4747]: I1126 13:15:35.994381 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:36 crc kubenswrapper[4747]: W1126 13:15:36.006355 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59482207_ba7e_4b71_a40b_968d8e3dcb8b.slice/crio-f5edc436c802175ca9a07986cfa7354a14a57e55f5eca04d428de004a0b70ba7 WatchSource:0}: Error finding container f5edc436c802175ca9a07986cfa7354a14a57e55f5eca04d428de004a0b70ba7: Status 404 returned error can't find the container with id f5edc436c802175ca9a07986cfa7354a14a57e55f5eca04d428de004a0b70ba7 Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.010023 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.015313 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75p22" Nov 26 13:15:36 crc kubenswrapper[4747]: W1126 13:15:36.029254 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405692d3_ec7c_4ebe_8d8f_d89f0de8a62a.slice/crio-db8615feb4c7b5764bc52ca72a59d754c394b0d59e469013b5b7a9de47ffc315 WatchSource:0}: Error finding container db8615feb4c7b5764bc52ca72a59d754c394b0d59e469013b5b7a9de47ffc315: Status 404 returned error can't find the container with id db8615feb4c7b5764bc52ca72a59d754c394b0d59e469013b5b7a9de47ffc315 Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.050035 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.093433 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.131559 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.161215 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.200987 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.228067 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.269647 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.312541 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.348707 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.388632 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.400758 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.449549 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.460252 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.467286 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npngj\" (UniqueName: \"kubernetes.io/projected/5f152815-d3e9-4250-9427-94f851c10579-kube-api-access-npngj\") pod \"node-resolver-p296l\" (UID: \"5f152815-d3e9-4250-9427-94f851c10579\") " pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.528865 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p296l" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.529726 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\
\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: W1126 13:15:36.545243 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f152815_d3e9_4250_9427_94f851c10579.slice/crio-25839f298570afe9eede435c29dd67eecdce01f565d4cc3eefeac096249db3b0 WatchSource:0}: Error finding container 25839f298570afe9eede435c29dd67eecdce01f565d4cc3eefeac096249db3b0: Status 404 returned error can't find the container with id 25839f298570afe9eede435c29dd67eecdce01f565d4cc3eefeac096249db3b0 Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.558012 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.590714 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.627579 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.668116 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.714836 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.752498 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.792567 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.797604 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:36 crc kubenswrapper[4747]: E1126 13:15:36.797735 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.830623 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.870803 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.911142 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.954299 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.962607 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12" exitCode=0 Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.962690 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.962753 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"f5edc436c802175ca9a07986cfa7354a14a57e55f5eca04d428de004a0b70ba7"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.964403 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5"} Nov 26 
13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.966364 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p296l" event={"ID":"5f152815-d3e9-4250-9427-94f851c10579","Type":"ContainerStarted","Data":"76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.966412 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p296l" event={"ID":"5f152815-d3e9-4250-9427-94f851c10579","Type":"ContainerStarted","Data":"25839f298570afe9eede435c29dd67eecdce01f565d4cc3eefeac096249db3b0"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.968203 4747 generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649" exitCode=0 Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.968239 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.968276 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerStarted","Data":"db8615feb4c7b5764bc52ca72a59d754c394b0d59e469013b5b7a9de47ffc315"} Nov 26 13:15:36 crc kubenswrapper[4747]: I1126 13:15:36.991908 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:36Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.030900 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.073769 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: 
I1126 13:15:37.108543 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.127040 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127235 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:15:41.127203093 +0000 UTC m=+28.113514118 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.127435 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.127475 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.127515 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.127543 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127585 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127613 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127674 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:41.127656884 +0000 UTC m=+28.113967899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127619 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127702 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127702 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127730 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:41.127722956 +0000 UTC m=+28.114034081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127734 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127740 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127823 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:41.127802408 +0000 UTC m=+28.114113443 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127748 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.127887 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:41.12787891 +0000 UTC m=+28.114189935 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.151655 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc 
kubenswrapper[4747]: I1126 13:15:37.203755 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.229831 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.271722 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.308724 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.352737 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.405137 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.430832 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.477982 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.510844 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.547500 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.593547 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.630257 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.797509 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.797509 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.797637 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:37 crc kubenswrapper[4747]: E1126 13:15:37.797710 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974111 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974173 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974189 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974200 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974210 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.974223 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.976195 4747 
generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a" exitCode=0 Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.976427 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a"} Nov 26 13:15:37 crc kubenswrapper[4747]: I1126 13:15:37.994620 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.013377 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.025726 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.042878 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.063961 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.081432 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.092470 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.104817 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.124322 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.138168 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.148502 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.159267 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.172936 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.189643 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.303674 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t6mph"] Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.304089 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.307106 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.307106 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.307532 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.308355 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.322299 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.352926 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.394028 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.431696 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.440075 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37f5069d-8915-40b7-b10d-59ed2d50516c-serviceca\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.440391 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37f5069d-8915-40b7-b10d-59ed2d50516c-host\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.440502 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttft9\" (UniqueName: \"kubernetes.io/projected/37f5069d-8915-40b7-b10d-59ed2d50516c-kube-api-access-ttft9\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.472541 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.511598 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.541375 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37f5069d-8915-40b7-b10d-59ed2d50516c-host\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.541421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttft9\" (UniqueName: \"kubernetes.io/projected/37f5069d-8915-40b7-b10d-59ed2d50516c-kube-api-access-ttft9\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.541446 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37f5069d-8915-40b7-b10d-59ed2d50516c-serviceca\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.541578 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37f5069d-8915-40b7-b10d-59ed2d50516c-host\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.551709 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37f5069d-8915-40b7-b10d-59ed2d50516c-serviceca\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.555001 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.581099 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttft9\" (UniqueName: \"kubernetes.io/projected/37f5069d-8915-40b7-b10d-59ed2d50516c-kube-api-access-ttft9\") pod \"node-ca-t6mph\" (UID: \"37f5069d-8915-40b7-b10d-59ed2d50516c\") " pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.612854 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.617092 4747 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-image-registry/node-ca-t6mph" Nov 26 13:15:38 crc kubenswrapper[4747]: W1126 13:15:38.640577 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f5069d_8915_40b7_b10d_59ed2d50516c.slice/crio-175e14009e423c4d9d4da5a2977be1b625029d439159416f8411619a96d1eaac WatchSource:0}: Error finding container 175e14009e423c4d9d4da5a2977be1b625029d439159416f8411619a96d1eaac: Status 404 returned error can't find the container with id 175e14009e423c4d9d4da5a2977be1b625029d439159416f8411619a96d1eaac Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.657428 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.694992 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.732792 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.775894 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.797932 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:38 crc kubenswrapper[4747]: E1126 13:15:38.798171 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.809634 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.850098 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.887575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.982598 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t6mph" event={"ID":"37f5069d-8915-40b7-b10d-59ed2d50516c","Type":"ContainerStarted","Data":"72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862"} Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.982687 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t6mph" 
event={"ID":"37f5069d-8915-40b7-b10d-59ed2d50516c","Type":"ContainerStarted","Data":"175e14009e423c4d9d4da5a2977be1b625029d439159416f8411619a96d1eaac"} Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.986738 4747 generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90" exitCode=0 Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.986788 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90"} Nov 26 13:15:38 crc kubenswrapper[4747]: I1126 13:15:38.998644 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.014281 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.033859 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.054556 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.087254 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.127004 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.168730 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.208708 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.250583 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.290806 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.333732 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.370381 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.414920 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.470281 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.499404 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.532173 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.567349 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.609465 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.648223 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.690762 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.746922 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.773671 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.778702 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.781734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.781892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.782010 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.782298 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:15:39 crc 
kubenswrapper[4747]: I1126 13:15:39.798326 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.798471 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.798333 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.798572 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.817935 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.862448 4747 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.862887 4747 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.864551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.864619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.864632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.864651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.864663 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.886816 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892127 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892151 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892172 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.892522 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.907409 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.911747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.911787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.911799 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.911817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.911829 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.925751 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.929721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.929747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.929758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.929772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.929783 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.932536 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.949177 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.952110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.952154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.952174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.952197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.952216 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.967425 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: E1126 13:15:39.967544 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.968709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.968755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.968766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.968779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.968788 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:39Z","lastTransitionTime":"2025-11-26T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.971385 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.992841 4747 generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7" exitCode=0 Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.992904 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7"} Nov 26 13:15:39 crc kubenswrapper[4747]: I1126 13:15:39.998799 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.008270 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.050530 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.071429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.071469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.071480 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.071497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.071508 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.088266 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.146664 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.173986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.174054 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.174082 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.174106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.174119 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.192846 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.216428 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.251222 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.277224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.277312 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.277331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.277363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.277383 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.286285 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.330236 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.380755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.380821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.380843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.380868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.380886 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.386678 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.419906 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.458256 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.486427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.486475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.486488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.486511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.486525 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.495189 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.528556 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.572480 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.589547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.589592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.589606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.589623 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.589637 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.610671 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.652121 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.692934 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.693951 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.694006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.694017 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.694038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.694056 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.742026 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b
6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:40Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.796691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.796743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.796754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.796771 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.796782 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.797410 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:40 crc kubenswrapper[4747]: E1126 13:15:40.797546 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.899647 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.899697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.899708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.899728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:40 crc kubenswrapper[4747]: I1126 13:15:40.899740 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:40Z","lastTransitionTime":"2025-11-26T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.001870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.001925 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.001938 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.001953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.001964 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.007489 4747 generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f" exitCode=0 Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.007536 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.024771 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.046518 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.071755 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.097492 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.104568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.104600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.104609 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.104623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.104633 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.110955 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.122892 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.136901 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.148256 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.167388 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.167528 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.167560 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167587 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:15:49.167558195 +0000 UTC m=+36.153869210 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167672 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167790 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167803 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167810 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:49.167791691 +0000 UTC m=+36.154102706 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167814 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167851 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:49.167844872 +0000 UTC m=+36.154155887 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.167735 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.167928 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.167658 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.168248 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:49.168235942 +0000 UTC m=+36.154546957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.168279 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.168294 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.168310 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.168392 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:15:49.168333334 +0000 UTC m=+36.154644450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.176446 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.193040 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208083 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208116 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208130 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.208384 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.226001 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.254016 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.293883 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.310714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 
crc kubenswrapper[4747]: I1126 13:15:41.310768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.310781 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.310801 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.310813 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.329566 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.413582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.413654 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.413674 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.413697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.413720 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.517852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.517912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.517928 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.517955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.517973 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.621965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.622198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.622253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.622290 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.622315 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.725714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.725772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.725790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.725816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.725838 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.797445 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.797467 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.797632 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:41 crc kubenswrapper[4747]: E1126 13:15:41.797997 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.829469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.829523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.829540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.829565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.829582 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.933164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.933222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.933244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.933273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:41 crc kubenswrapper[4747]: I1126 13:15:41.933294 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:41Z","lastTransitionTime":"2025-11-26T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.017610 4747 generic.go:334] "Generic (PLEG): container finished" podID="405692d3-ec7c-4ebe-8d8f-d89f0de8a62a" containerID="c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8" exitCode=0 Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.017835 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerDied","Data":"c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.036445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.036505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.036522 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.036546 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.036562 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.049655 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.088834 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.112483 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.133590 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.140221 4747 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.140294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.140310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.140353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.140367 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.147746 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.162120 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.181759 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.200612 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.220714 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.242280 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 
13:15:42.247589 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.247644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.247662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.247688 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.247707 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.256831 4747 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.262541 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-p
arent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.285299 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.311441 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.334289 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.349883 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:42Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.351461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.351514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.351533 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.351557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.351575 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.454141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.454211 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.454234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.454270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.454295 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.557125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.557172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.557185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.557202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.557214 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.660505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.660550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.660566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.660585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.660598 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.774189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.774237 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.774255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.774279 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.774299 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.797670 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:42 crc kubenswrapper[4747]: E1126 13:15:42.797852 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.877701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.877789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.877813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.877843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.877868 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.980462 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.980524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.980542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.980566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:42 crc kubenswrapper[4747]: I1126 13:15:42.980584 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:42Z","lastTransitionTime":"2025-11-26T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.029951 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" event={"ID":"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a","Type":"ContainerStarted","Data":"e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.039267 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.039819 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.039862 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.039879 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.060843 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.083633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.083692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.083718 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.083747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.083773 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.088926 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.089109 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.093094 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\
\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.119474 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.151801 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.174237 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.186993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.187091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.187118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.187147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.187169 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.198497 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.221009 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.242091 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.270575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z 
is after 2025-08-24T17:21:41Z"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.290448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.290523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.290542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.290569 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.290599 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.293701 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.313754 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.330142 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.348833 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.369768 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.388171 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.394169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.394232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.394249 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.394274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.394292 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.406840 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.421405 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.437506 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.454196 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.456106 4747 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.468249 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 
13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.490601 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.496822 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.496879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.496897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.496956 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.496974 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.507316 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.529315 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.549399 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.571505 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600146 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.600281 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:
15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.619160 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 
2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.640173 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.660513 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.687853 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708e
efd7964d5cc02a901b07a0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.703259 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.703344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.703365 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.703394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.703449 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.798023 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.798240 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:43 crc kubenswrapper[4747]: E1126 13:15:43.798441 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:43 crc kubenswrapper[4747]: E1126 13:15:43.798575 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.805530 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.805594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.805611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.805633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.805650 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.819713 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.861352 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.890781 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.908735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.908806 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.908829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.908860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.908880 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:43Z","lastTransitionTime":"2025-11-26T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.911298 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.927461 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.942570 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.964635 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:43 crc kubenswrapper[4747]: I1126 13:15:43.985401 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.000713 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.012096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.012164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.012190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.012226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.012249 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.016282 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.033163 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.067882 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.088436 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.107958 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.115336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.115395 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.115407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.115433 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.115447 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.126327 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.218692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 
13:15:44.218752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.218774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.218801 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.218826 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.322135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.322203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.322214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.322240 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.322254 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.425185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.425243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.425261 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.425289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.425307 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.528271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.528322 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.528338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.528367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.528391 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.630743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.630800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.630816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.630841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.630859 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.733809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.733852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.733862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.733877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.733887 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.798137 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:44 crc kubenswrapper[4747]: E1126 13:15:44.798326 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.836683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.836769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.836788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.836814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.836832 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.939778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.939836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.939854 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.939878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:44 crc kubenswrapper[4747]: I1126 13:15:44.939894 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:44Z","lastTransitionTime":"2025-11-26T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.042519 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.042661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.042680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.042703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.042722 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.145494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.145557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.145574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.145598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.145628 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.249429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.249764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.249776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.249793 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.249802 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.277852 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.295614 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.313581 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.335113 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.355572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.355686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.355756 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.355790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.355823 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.368867 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.386915 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.406452 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.425566 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.438775 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.451481 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.458680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.458734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.458748 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.458767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.458781 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.474920 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf
94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.491671 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.509593 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.543337 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.561965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.562018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.562031 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.562072 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.562088 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.577022 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.598588 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.664223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.664260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.664271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.664286 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.664297 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.767012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.767114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.767132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.767162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.767185 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.797350 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.797411 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:45 crc kubenswrapper[4747]: E1126 13:15:45.797600 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:45 crc kubenswrapper[4747]: E1126 13:15:45.797740 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.870920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.870993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.871010 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.871035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.871085 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.974429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.974489 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.974508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.974533 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:45 crc kubenswrapper[4747]: I1126 13:15:45.974554 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:45Z","lastTransitionTime":"2025-11-26T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.056590 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/0.log" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.060823 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4" exitCode=1 Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.060881 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.062002 4747 scope.go:117] "RemoveContainer" containerID="3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.075571 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.077471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.077531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.077550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.077576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.077594 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.087693 4747 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.090733 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.103261 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.121718 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.138518 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.155167 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.168152 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\
\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.180938 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.180987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.181009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.181030 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.181043 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.187212 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.211356 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23
aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.230416 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc 
kubenswrapper[4747]: I1126 13:15:46.248757 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.273533 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.295746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.295786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.295802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.295824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.295841 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.296611 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.319872 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"message\\\":\\\"-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.610577 6042 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.613657 6042 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:45.613699 6042 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:45.613719 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:45.613735 6042 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:45.613760 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:15:45.613759 6042 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:45.613766 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:15:45.613779 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:15:45.613790 6042 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:15:45.613792 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:45.613801 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:15:45.613817 6042 factory.go:656] Stopping watch factory\\\\nI1126 13:15:45.613834 6042 ovnkube.go:599] Stopped ovnkube\\\\nI1126 
13:15:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.337514 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.398711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.398775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.398799 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc 
kubenswrapper[4747]: I1126 13:15:46.398827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.398848 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.502047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.502131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.502147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.502172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.502189 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.604366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.604432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.604457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.604485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.604506 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.707431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.707502 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.707521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.707547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.707567 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.797544 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:46 crc kubenswrapper[4747]: E1126 13:15:46.797737 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.810399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.810439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.810452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.810470 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.810481 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.912586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.912661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.912684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.912970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:46 crc kubenswrapper[4747]: I1126 13:15:46.913362 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:46Z","lastTransitionTime":"2025-11-26T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.017128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.017188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.017206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.017229 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.017246 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.066380 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/0.log" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.070308 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.071138 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.085918 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.108672 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.120028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.120081 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.120093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.120114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.120128 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.139297 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"message\\\":\\\"-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.610577 6042 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.613657 6042 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:45.613699 6042 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:45.613719 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:45.613735 6042 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:45.613760 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:15:45.613759 6042 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:45.613766 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:15:45.613779 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:15:45.613790 6042 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:15:45.613792 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:45.613801 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:15:45.613817 6042 factory.go:656] Stopping watch factory\\\\nI1126 13:15:45.613834 6042 ovnkube.go:599] Stopped ovnkube\\\\nI1126 
13:15:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.154002 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.168371 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.184838 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.199009 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.215324 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.223472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.223525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.223542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.223567 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.223583 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.236323 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.261650 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.280316 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.301575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.320926 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.326086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.326159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.326260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.326381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.326474 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.345037 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:
15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.378937 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.415724 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd"] Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.416521 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.419185 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.420636 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.429657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.429717 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.429728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.429751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.429761 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.440527 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.459453 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.471803 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.484521 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.503370 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.517132 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.532130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.532157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.532165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.532177 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.532185 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.542564 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.542611 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.542643 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfrl\" (UniqueName: \"kubernetes.io/projected/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-kube-api-access-jdfrl\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.542695 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.544805 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.556883 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.568968 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.590288 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"message\\\":\\\"-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.610577 6042 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.613657 6042 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:45.613699 6042 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:45.613719 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:45.613735 6042 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:45.613760 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:15:45.613759 6042 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:45.613766 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:15:45.613779 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:15:45.613790 6042 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:15:45.613792 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:45.613801 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:15:45.613817 6042 factory.go:656] Stopping watch factory\\\\nI1126 13:15:45.613834 6042 ovnkube.go:599] Stopped ovnkube\\\\nI1126 
13:15:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.604042 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.618153 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634204 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.634612 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.643357 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.643496 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.643558 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.643606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfrl\" (UniqueName: \"kubernetes.io/projected/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-kube-api-access-jdfrl\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.644311 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.644776 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.648019 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.651755 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.661908 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.676642 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfrl\" (UniqueName: \"kubernetes.io/projected/12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea-kube-api-access-jdfrl\") pod \"ovnkube-control-plane-749d76644c-sxtwd\" (UID: \"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.680575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736537 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736826 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736853 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.736871 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: W1126 13:15:47.762331 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e5faae_7d45_4ac5_8dfc_b881dfb4c9ea.slice/crio-646064c3e24972ea2a1756f8be1ba3ef70e692f37ff0b1a6dbbf0642bce29195 WatchSource:0}: Error finding container 646064c3e24972ea2a1756f8be1ba3ef70e692f37ff0b1a6dbbf0642bce29195: Status 404 returned error can't find the container with id 646064c3e24972ea2a1756f8be1ba3ef70e692f37ff0b1a6dbbf0642bce29195 Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.797960 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:47 crc kubenswrapper[4747]: E1126 13:15:47.798327 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.798133 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:47 crc kubenswrapper[4747]: E1126 13:15:47.799266 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.841454 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.841529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.841548 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.841572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.841591 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.944796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.944839 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.944850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.944867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:47 crc kubenswrapper[4747]: I1126 13:15:47.944879 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:47Z","lastTransitionTime":"2025-11-26T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.047290 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.047340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.047420 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.047465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.047549 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.077951 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/1.log" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.080420 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/0.log" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.083875 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6" exitCode=1 Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.084133 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.084369 4747 scope.go:117] "RemoveContainer" containerID="3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.085029 4747 scope.go:117] "RemoveContainer" containerID="f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6" Nov 26 13:15:48 crc kubenswrapper[4747]: E1126 13:15:48.085285 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.090824 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" event={"ID":"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea","Type":"ContainerStarted","Data":"e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.090901 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" event={"ID":"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea","Type":"ContainerStarted","Data":"646064c3e24972ea2a1756f8be1ba3ef70e692f37ff0b1a6dbbf0642bce29195"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.101979 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.132192 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.150085 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.150127 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.150137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.150153 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.150164 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.164289 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"message\\\":\\\"-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.610577 6042 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.613657 6042 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:45.613699 6042 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:45.613719 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:45.613735 6042 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:45.613760 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:15:45.613759 6042 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:45.613766 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:15:45.613779 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:15:45.613790 6042 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:15:45.613792 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:45.613801 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:15:45.613817 6042 factory.go:656] Stopping watch factory\\\\nI1126 13:15:45.613834 6042 ovnkube.go:599] Stopped ovnkube\\\\nI1126 
13:15:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.182027 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.196450 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.210382 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.223573 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.238984 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.251776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.251818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.251830 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.251847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.251860 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.252947 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.268151 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.286523 
4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:
35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.305201 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.321545 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.348149 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.354545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.354596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.354612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.354632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.354646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.367577 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.394471 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:48Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.457481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.457547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.457563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.457585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.457600 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.560326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.560393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.560411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.560436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.560453 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.664242 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.664321 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.664341 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.664371 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.664390 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.767678 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.767748 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.767769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.767794 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.767812 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.797285 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:48 crc kubenswrapper[4747]: E1126 13:15:48.797454 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.870398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.870468 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.870486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.870511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.870528 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.973238 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.973315 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.973332 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.973357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:48 crc kubenswrapper[4747]: I1126 13:15:48.973375 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:48Z","lastTransitionTime":"2025-11-26T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.075767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.075848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.075868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.075895 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.075912 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.098850 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" event={"ID":"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea","Type":"ContainerStarted","Data":"0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.100782 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/1.log" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.104718 4747 scope.go:117] "RemoveContainer" containerID="f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.104857 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.118109 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.137743 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.162932 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f732
00e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa29832baff5fb11b5c27e996a95f3a9224708eefd7964d5cc02a901b07a0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"message\\\":\\\"-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.610577 6042 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:45.613657 6042 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:45.613699 6042 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:45.613719 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:45.613735 6042 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:45.613760 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:15:45.613759 6042 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:45.613766 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:15:45.613779 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:15:45.613790 6042 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:15:45.613792 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:45.613801 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:15:45.613817 6042 factory.go:656] Stopping watch factory\\\\nI1126 13:15:45.613834 6042 ovnkube.go:599] Stopped ovnkube\\\\nI1126 13:15:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch 
factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.178825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.178858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.178867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.178882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.178891 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.179953 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.193332 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.209662 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.222690 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.236837 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.257599 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.259807 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.260000 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.260077 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.260165 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.260235 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260327 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260392 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.26037612 +0000 UTC m=+52.246687135 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260581 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260637 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260654 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260673 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.260599 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.260527584 +0000 UTC m=+52.246838599 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.261443 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.261423626 +0000 UTC m=+52.247734651 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.261750 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.261792 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.261814 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.261864 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.261847896 +0000 UTC m=+52.248158911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.262300 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.262284027 +0000 UTC m=+52.248595042 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.276693 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.281909 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.281942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.281952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.281970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.281981 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.302629 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6zzh7"] Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.303181 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.303237 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.305934 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab0
39f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.323384 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.337803 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.356979 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.374189 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.384660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.384706 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.384721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.384743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.384759 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.391752 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.410497 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.426512 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.441811 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.459336 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.461746 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5sgx\" (UniqueName: \"kubernetes.io/projected/67391449-89bb-423a-b690-2f60a43ccfad-kube-api-access-c5sgx\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.461832 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.475729 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.487244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.487291 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.487303 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.487319 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.487331 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.496668 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.515697 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.530567 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.545838 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.562637 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.562965 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5sgx\" (UniqueName: \"kubernetes.io/projected/67391449-89bb-423a-b690-2f60a43ccfad-kube-api-access-c5sgx\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.563033 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.563199 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.563286 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:15:50.063262029 +0000 UTC m=+37.049573064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.578732 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.587676 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5sgx\" (UniqueName: \"kubernetes.io/projected/67391449-89bb-423a-b690-2f60a43ccfad-kube-api-access-c5sgx\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " 
pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.589208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.589233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.589243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.589261 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.589274 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.594756 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.606696 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.616088 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.629874 4747 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69
f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.644554 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.660116 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f732
00e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:49Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.691094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.691172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.691182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.691197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.691209 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.793444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.793494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.793505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.793520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.793533 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.798036 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.798109 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.798174 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:49 crc kubenswrapper[4747]: E1126 13:15:49.798241 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.896214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.896261 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.896269 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.896288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.896298 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.999096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.999158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.999174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.999205 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:49 crc kubenswrapper[4747]: I1126 13:15:49.999223 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:49Z","lastTransitionTime":"2025-11-26T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.068584 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.068921 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.069118 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:15:51.069087336 +0000 UTC m=+38.055398391 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.102158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.102255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.102275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.102300 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.102318 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.205172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.205223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.205240 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.205268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.205287 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.216511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.216551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.216568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.216592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.216611 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.240856 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:50Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.246040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.246144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.246170 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.246203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.246384 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.269485 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:50Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.274325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.274379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.274397 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.274421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.274443 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.291716 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:50Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.297897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.297952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.297969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.297993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.298012 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.318460 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:50Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.323648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.323697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.323713 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.323736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.323756 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.343538 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:50Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.343799 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.345847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.345900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.345917 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.345937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.345953 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.449505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.449606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.449641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.449677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.449700 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.553514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.553604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.553629 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.553664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.553688 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.657478 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.657538 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.657558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.657584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.657625 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.760335 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.760412 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.760429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.760456 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.760504 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.797625 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.797800 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.797894 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:50 crc kubenswrapper[4747]: E1126 13:15:50.797990 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.864339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.864399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.864416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.864440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.864459 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.967755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.967814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.967831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.967857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:50 crc kubenswrapper[4747]: I1126 13:15:50.967876 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:50Z","lastTransitionTime":"2025-11-26T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.071248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.071322 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.071348 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.071382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.071408 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.079952 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:51 crc kubenswrapper[4747]: E1126 13:15:51.080214 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:51 crc kubenswrapper[4747]: E1126 13:15:51.080318 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:15:53.08028906 +0000 UTC m=+40.066600105 (durationBeforeRetry 2s). 
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.174294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.174362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.174379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.174409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.174430 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.277491 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.277556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.277573 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.277599 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.277616 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.380586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.380664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.380685 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.380709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.380728 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.484646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.484692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.484711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.484742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.484759 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.587680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.587766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.587790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.587819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.587839 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.691181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.691267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.691344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.691382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.691404 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.793973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.794133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.794176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.794214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.794237 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.798139 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.798149 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:51 crc kubenswrapper[4747]: E1126 13:15:51.798380 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:51 crc kubenswrapper[4747]: E1126 13:15:51.798530 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.897798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.897875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.897900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.897929 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:51 crc kubenswrapper[4747]: I1126 13:15:51.897946 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:51Z","lastTransitionTime":"2025-11-26T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.001831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.002405 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.002425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.002459 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.002477 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.105418 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.105530 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.105560 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.105603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.105635 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.208455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.208520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.208538 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.208562 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.208581 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.311490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.311563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.311585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.311615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.311637 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.414359 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.414428 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.414447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.414476 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.414494 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.517446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.517511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.517529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.517559 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.517577 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.620511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.620585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.620609 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.620638 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.620659 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.724009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.724147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.724168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.724191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.724210 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.797928 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.797952 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:52 crc kubenswrapper[4747]: E1126 13:15:52.798237 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:52 crc kubenswrapper[4747]: E1126 13:15:52.798402 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.827757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.827833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.827852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.827877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.827895 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.930860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.930919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.930944 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.930969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:52 crc kubenswrapper[4747]: I1126 13:15:52.930986 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:52Z","lastTransitionTime":"2025-11-26T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.034288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.034366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.034391 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.034427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.034448 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.109576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:53 crc kubenswrapper[4747]: E1126 13:15:53.109825 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:53 crc kubenswrapper[4747]: E1126 13:15:53.109929 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:15:57.109900125 +0000 UTC m=+44.096211170 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.137349 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.137411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.137431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.137471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.137503 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.239711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.239770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.239788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.239811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.239828 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
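
The mount failure above is parked under the volume manager's per-operation exponential backoff: "No retries permitted until ... (durationBeforeRetry 4s)" schedules the next attempt instead of retrying immediately. The underlying cause, object "openshift-multus"/"metrics-daemon-secret" not registered, indicates the kubelet's watch-based secret manager has not yet registered that secret for the pod, so the mount cannot be satisfied. A doubling schedule that reproduces the observed 4s delay on the fourth failure; the 500ms seed and the cap are assumed defaults, not values taken from this log:

    def backoff_schedule(initial: float = 0.5, factor: float = 2.0,
                         cap: float = 122.0, failures: int = 8):
        """Yield (failure count, delay before the next retry) pairs."""
        delay = initial
        for n in range(1, failures + 1):
            yield n, min(delay, cap)
            delay *= factor

    for n, delay in backoff_schedule():
        print(f"failure {n}: retry in {delay:g}s")
    # failure 4: retry in 4s  -> matches "(durationBeforeRetry 4s)" above
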
Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.342737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.342807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.342828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.342856 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.342879 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.446770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.446838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.446865 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.446891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.446907 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.549473 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.549547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.549670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.549705 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.549728 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.653233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.653294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.653310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.653330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.653344 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.756675 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.756746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.756770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.756802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.756826 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.798216 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:53 crc kubenswrapper[4747]: E1126 13:15:53.798460 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.798565 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:53 crc kubenswrapper[4747]: E1126 13:15:53.798698 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.814152 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.844380 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f732
00e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.858739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.859158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.859389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.859595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.859784 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
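
Every one of these "Failed to update status for pod" entries fails the same way: the kubelet's status PATCH is routed through the pod.network-node-identity.openshift.io mutating webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock, so TLS verification rejects the call and the patch never lands in the API. Until that certificate is rotated, pod status written by this kubelet stays stale. The validity-window check behind the x509 error, reconstructed with the two timestamps the log does give (NotBefore is a placeholder; the log never states it):

    from datetime import datetime, timezone

    def within_validity(now, not_before, not_after):
        """The window test that x509 verification performs on each handshake."""
        return not_before <= now <= not_after

    now = datetime(2025, 11, 26, 13, 15, 53, tzinfo=timezone.utc)       # from the log
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # from the log
    not_before = datetime(2025, 8, 24, 0, 0, 0, tzinfo=timezone.utc)    # placeholder, not in the log

    if not within_validity(now, not_before, not_after):
        print("x509: certificate has expired or is not yet valid: "
              f"current time {now:%Y-%m-%dT%H:%M:%SZ} is after {not_after:%Y-%m-%dT%H:%M:%SZ}")
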
Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.865432 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.879896 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.897983 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.917859 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.932499 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
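
The synthesized lastState carried in these patches (exitCode 137, reason ContainerStatusUnknown, "The container could not be located when the pod was deleted") is what the kubelet records when it cannot find a container it believes existed, as after this restart. The 137 follows the usual 128+signal convention, i.e. SIGKILL:

    import signal  # signal.Signals covers the POSIX signal numbers used here

    def describe_exit(code: int) -> str:
        """Decode the 128+N convention used for signal-terminated containers."""
        if code > 128:
            return f"terminated by {signal.Signals(code - 128).name} (exit code {code})"
        return f"exited normally with status {code}"

    print(describe_exit(137))  # terminated by SIGKILL (exit code 137)
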
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.944864 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.962367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.962389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.962398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.962412 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.962420 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:53Z","lastTransitionTime":"2025-11-26T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.963748 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.982506 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:53 crc kubenswrapper[4747]: I1126 13:15:53.997711 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:15:53Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.014808 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.038863 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.060612 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.065648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.065700 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.065716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.065739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.065755 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.081665 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.099431 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.115513 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:15:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.168028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.168145 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.168168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.168197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.168220 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.270878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.270947 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.270964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.270988 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.271007 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.374176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.374232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.374249 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.374274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.374292 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.476906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.476970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.476996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.477026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.477048 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.579751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.580146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.580345 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.580490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.580619 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.683875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.683954 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.683976 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.684006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.684028 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.786508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.786582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.786606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.786635 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.786659 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.797892 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.797979 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:54 crc kubenswrapper[4747]: E1126 13:15:54.798091 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:54 crc kubenswrapper[4747]: E1126 13:15:54.798221 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.890094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.890152 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.890174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.890204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.890226 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
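[Editor's note] Both "Error syncing pod" entries and the recurring NodeNotReady condition trace back to one fact: the CRI runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, so it reports NetworkReady=false and the kubelet refuses to create sandboxes for non-host-network pods. A rough Python sketch of that readiness test (the exact file-matching rules belong to the runtime's CNI plumbing; this is an approximation only):

    # Sketch: approximate the "no CNI configuration file" check from the log.
    import glob, os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the log

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(os.path.join(conf_dir, p)) for p in patterns)

    if not network_ready():
        print("NetworkReady=false: the network plugin has not written its CNI config yet")

The message's own hint ("Has your network provider started?") is the point: the config file appears only once the network provider's daemons come up, which has not happened yet at this point in the log.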
Has your network provider started?"} Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.992697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.992764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.992787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.992815 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:54 crc kubenswrapper[4747]: I1126 13:15:54.992837 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:54Z","lastTransitionTime":"2025-11-26T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.096253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.096315 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.096332 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.096355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.096373 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.199659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.199726 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.199745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.199770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.199788 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.302863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.303751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.303934 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.304110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.304263 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.407772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.407843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.407861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.407890 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.407907 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.511282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.511366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.511393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.511427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.511450 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.614416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.614473 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.614490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.614514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.614531 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.717623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.717686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.717704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.717728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.717745 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.798305 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.798332 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:55 crc kubenswrapper[4747]: E1126 13:15:55.798535 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:55 crc kubenswrapper[4747]: E1126 13:15:55.798686 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.821708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.821796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.821825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.821857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.821881 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.924493 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.924571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.924596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.924625 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:55 crc kubenswrapper[4747]: I1126 13:15:55.924649 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:55Z","lastTransitionTime":"2025-11-26T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.027204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.027232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.027241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.027255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.027266 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.130688 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.130763 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.130785 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.130811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.130831 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.233441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.233809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.233959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.234157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.234309 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.337460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.337531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.337544 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.337564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.337580 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.440446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.440527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.440550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.440584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.440608 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.543882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.543943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.543960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.543985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.544002 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.646643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.646701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.646718 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.646741 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.646759 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.749688 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.749756 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.749776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.749800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.749818 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.798159 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.798236 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:56 crc kubenswrapper[4747]: E1126 13:15:56.798418 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:56 crc kubenswrapper[4747]: E1126 13:15:56.798571 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.853175 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.853304 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.853344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.854138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.854191 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.957215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.957443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.957474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.957937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:56 crc kubenswrapper[4747]: I1126 13:15:56.958037 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:56Z","lastTransitionTime":"2025-11-26T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.061194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.061259 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.061277 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.061307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.061325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.150434 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:57 crc kubenswrapper[4747]: E1126 13:15:57.150633 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:57 crc kubenswrapper[4747]: E1126 13:15:57.150706 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:16:05.150682267 +0000 UTC m=+52.136993312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.164424 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.164476 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.164493 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.164521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.164546 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.268134 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.268180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.268198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.268223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.268241 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
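[Editor's note] The MountVolume.SetUp failure for metrics-certs is retried on a backoff: the log schedules the next attempt 8s out ("No retries permitted until ... durationBeforeRetry 8s"). That is consistent with an exponential backoff that starts at 500ms and doubles per failure (0.5s -> 1s -> 2s -> 4s -> 8s); the constants below are assumptions for illustration, not values read from this log:

    # Sketch: an exponential backoff schedule of the kind kubelet applies
    # to failed volume operations (initial/factor/cap values are assumed).
    def backoff_schedule(initial=0.5, factor=2.0, cap=122.0, attempts=8):
        delay = initial
        for i in range(attempts):
            yield i + 1, min(delay, cap)
            delay *= factor

    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay:g}s before retrying MountVolume.SetUp")

The underlying error ("metrics-daemon-secret not registered") typically clears once the kubelet's object caches sync, after which a later retry succeeds without intervention.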
Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.371004 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.371076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.371093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.371115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.371131 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.475393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.475453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.475471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.475494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.475510 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.577920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.577962 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.577969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.577985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.577995 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.680930 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.681349 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.681507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.681646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.681785 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.785543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.785597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.785615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.785642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.785661 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.797913 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.797949 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:57 crc kubenswrapper[4747]: E1126 13:15:57.798138 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:57 crc kubenswrapper[4747]: E1126 13:15:57.798362 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.887891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.887952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.887976 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.888002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.888024 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.990778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.990918 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.990944 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.990974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:57 crc kubenswrapper[4747]: I1126 13:15:57.990996 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:57Z","lastTransitionTime":"2025-11-26T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.094121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.094185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.094207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.094236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.094258 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.196141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.196185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.196202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.196223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.196240 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.298995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.299043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.299084 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.299107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.299126 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.401673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.401731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.401749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.401772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.401792 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.504892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.505015 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.505130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.505175 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.505211 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.607754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.607842 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.607859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.607882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.607901 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.710884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.710943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.710968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.710998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.711020 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.798196 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.798219 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:15:58 crc kubenswrapper[4747]: E1126 13:15:58.798356 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:15:58 crc kubenswrapper[4747]: E1126 13:15:58.798510 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.813747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.813808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.813825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.813850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.813866 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.916527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.916590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.916609 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.916641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:58 crc kubenswrapper[4747]: I1126 13:15:58.916661 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:58Z","lastTransitionTime":"2025-11-26T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.019693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.019770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.019796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.019829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.019851 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.121937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.121974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.121986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.122002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.122013 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.224138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.224193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.224209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.224236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.224252 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.327228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.327300 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.327325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.327354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.327374 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.429944 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.430009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.430027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.430075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.430106 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.533406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.533462 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.533480 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.533506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.533525 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.635987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.636097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.636125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.636154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.636175 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.739947 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.740025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.740043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.740102 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.740121 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.797836 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.797961 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:15:59 crc kubenswrapper[4747]: E1126 13:15:59.798030 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:15:59 crc kubenswrapper[4747]: E1126 13:15:59.798176 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.843591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.843883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.844022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.844226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.844353 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.948094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.948168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.948190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.948219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:15:59 crc kubenswrapper[4747]: I1126 13:15:59.948241 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:15:59Z","lastTransitionTime":"2025-11-26T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.051576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.051651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.051677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.051712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.051735 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.154363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.154425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.154442 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.154467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.154485 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.257107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.257178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.257199 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.257224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.257244 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.359426 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.359484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.359501 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.359525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.359541 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.462149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.462238 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.462290 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.462314 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.462363 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.525430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.525499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.525517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.525542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.525558 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
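The entries above repeat roughly every 100 ms because the kubelet's node-status loop keeps hitting the same readiness gate: no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so NetworkReady stays false and every pod sync is skipped. A minimal sketch of that gate, as a standalone diagnostic rather than kubelet code (the directory path comes from the log message; the accepted extensions are the usual CNI conventions and are an assumption here):

    // cnicheck.go: hypothetical diagnostic; reports whether any CNI config
    // file is present in the directory named by the kubelet error above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", confDir, err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		// .conf, .conflist and .json are the extensions CNI loaders
    		// conventionally accept (assumption, not taken from the log).
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration file found; network provider likely not started")
    	}
    }

An empty result from a check like this would match the log's state: the network operator (here, the OVN-Kubernetes stack that CRC ships) has not yet written its config, so the node stays NotReady.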
Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.546220 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.550642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.550697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.550714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.550737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.550755 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.566873 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.571548 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.571592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
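Every status-patch attempt fails the same way: the node.network-node-identity.openshift.io validating webhook at 127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-26. A small sketch, assuming only what the error message states (the address and the expiry), that fetches and prints the validity window of whatever certificate that endpoint is serving; this is a hypothetical diagnostic, not part of the kubelet or the webhook:

    // certcheck.go: hypothetical diagnostic; prints the validity window of the
    // certificate served by the webhook endpoint named in the errors above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    )

    func main() {
    	// InsecureSkipVerify lets us retrieve the chain even though it is
    	// expired; we only want to read its dates, not trust the connection.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatalf("dial: %v", err)
    	}
    	defer conn.Close()

    	for _, cert := range conn.ConnectionState().PeerCertificates {
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    	}
    }

On a CRC instance resumed long after its certificates lapsed, output like notAfter=2025-08-24 17:21:41 +0000 UTC would confirm the log's diagnosis; the kubelet then keeps retrying the patch, which is why the same error recurs below with only the timestamp changing.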
event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.571603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.571621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.571634 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.592518 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.596922 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.596963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.596974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.596994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.597007 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.616779 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.622110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.622154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.622166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.622184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.622196 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.637967 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.638160 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.639936 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
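The consecutive "Error updating node status, will retry" failures above share one root cause: the node-status PATCH is gated by the node.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-26T13:16:00Z. Once the retry budget is exhausted (nodeStatusUpdateRetry, 5 attempts in upstream kubelet), the kubelet logs "Unable to update node status" and the whole sequence starts over. A minimal sketch to confirm which certificate the endpoint is presenting, assuming Python 3 and the third-party cryptography package are available on the node (diagnostic tooling added here for illustration, not part of the cluster):

import ssl
from cryptography import x509

# Fetch the webhook's serving certificate without verifying it;
# verification would fail because the certificate is expired.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

On CRC this condition typically clears once the cluster's certificate-rotation machinery runs against a correct wall clock; the sketch only identifies which endpoint is serving the stale certificate.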
event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.639964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.639972 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.639985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.639994 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.742568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.742630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.742642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.742662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.742674 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.797694 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.797769 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.797901 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:00 crc kubenswrapper[4747]: E1126 13:16:00.798012 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.846090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.846465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.846608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.846757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.846885 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.950377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.950441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.950458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.950484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:00 crc kubenswrapper[4747]: I1126 13:16:00.950503 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:00Z","lastTransitionTime":"2025-11-26T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.054431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.054839 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.054994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.055256 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.055463 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.157871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.157992 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.158013 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.158143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.158175 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.261496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.261557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.261574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.261598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.261614 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.364246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.364388 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.364408 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.364431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.364485 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.467768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.467827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.467846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.467871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.467890 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.571288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.571362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.571380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.571808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.571862 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.675670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.675763 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.675787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.676293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.676578 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.780447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.780532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.780553 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.780588 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.780608 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.799299 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.799449 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:01 crc kubenswrapper[4747]: E1126 13:16:01.799619 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:01 crc kubenswrapper[4747]: E1126 13:16:01.799799 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.882899 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.882965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.882981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.883006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.883050 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.986024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.986174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.986193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.986217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:01 crc kubenswrapper[4747]: I1126 13:16:01.986237 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:01Z","lastTransitionTime":"2025-11-26T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.089639 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.089698 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.089714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.089739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.089758 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.192986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.193040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.193099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.193122 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.193140 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.296091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.296150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.296167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.296195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.296215 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.399150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.399217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.399239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.399267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.399289 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.502665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.502720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.502737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.502759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.502776 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.605445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.605523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.605545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.605575 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.605593 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.708867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.708933 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.708952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.708978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.708995 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.798097 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.798168 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:02 crc kubenswrapper[4747]: E1126 13:16:02.798266 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:02 crc kubenswrapper[4747]: E1126 13:16:02.798359 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.811339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.811401 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.811423 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.811449 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.811466 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.914434 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.914496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.914514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.914537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:02 crc kubenswrapper[4747]: I1126 13:16:02.914555 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:02Z","lastTransitionTime":"2025-11-26T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.017652 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.017708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.017724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.017746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.017765 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.119980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.120042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.120093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.120122 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.120144 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.222631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.222716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.222739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.222769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.222791 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.325797 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.325876 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.325900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.325927 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.325946 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.427898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.427999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.428018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.428043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.428090 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.531655 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.531703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.531718 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.531736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.531750 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.634416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.634474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.634492 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.634516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.634534 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.737275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.737358 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.737383 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.737419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.737438 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.797598 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.797641 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:03 crc kubenswrapper[4747]: E1126 13:16:03.797762 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:03 crc kubenswrapper[4747]: E1126 13:16:03.797895 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.822531 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.839827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.839879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.839896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.839921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.839944 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.856805 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.880296 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.900243 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.917872 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.934036 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.942945 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.942989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.943000 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.943018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.943030 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:03Z","lastTransitionTime":"2025-11-26T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.949730 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.966934 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:03 crc kubenswrapper[4747]: I1126 13:16:03.983986 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.005565 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.023807 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.042681 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.045996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.046047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.046091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.046119 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.046136 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.067036 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:
15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.085227 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.116791 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50
bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.133757 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.148580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.148634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.148646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.148668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.148683 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.153935 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.251527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.251589 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.251612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: 
I1126 13:16:04.251642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.251660 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.355156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.355263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.355319 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.355347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.355401 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.458498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.458546 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.458563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.458587 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.458606 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.562340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.563270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.563427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.563567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.563699 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.666744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.666803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.666820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.666848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.666866 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.770102 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.770160 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.770179 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.770200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.770217 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.797574 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.797569 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:04 crc kubenswrapper[4747]: E1126 13:16:04.797774 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:04 crc kubenswrapper[4747]: E1126 13:16:04.797992 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.799263 4747 scope.go:117] "RemoveContainer" containerID="f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.879343 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.879399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.879416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.879443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.879460 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.983357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.984417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.984752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.984946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:04 crc kubenswrapper[4747]: I1126 13:16:04.985274 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:04Z","lastTransitionTime":"2025-11-26T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.088968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.089024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.089042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.089096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.089116 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.157170 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.167871 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/1.log" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.171958 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.172480 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.173948 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.191479 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.191513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.191524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.191540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.191551 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.208459 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.227690 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.243313 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.243399 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:16:21.24338027 +0000 UTC m=+68.229691295 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.243578 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.243779 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.260708 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.275510 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.294707 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.294775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.294792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.294818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.294835 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.301429 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:
15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.317907 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.332018 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.344665 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.344797 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.344933 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.344976 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345029 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:16:37.345011123 +0000 UTC m=+84.331322138 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345046 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345104 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345118 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345122 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.345164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345201 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:37.345181667 +0000 UTC m=+84.331492692 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345208 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345255 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:37.345248059 +0000 UTC m=+84.331559074 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345319 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:37.34528568 +0000 UTC m=+84.331596715 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345353 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345365 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345374 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.345419 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:16:37.345413043 +0000 UTC m=+84.331724058 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.346364 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.360457 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.375701 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.398025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.398154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.398176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.398202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.398226 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
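The burst above is the section's second failure signature: alongside the webhook rejections, the kubelet records node events and flips the node's Ready condition to False because the container runtime's network is not ready — there is no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that readiness test follows, assuming the conventional CNI config extensions (.conf, .conflist, .json); the file name and the extension list are illustrative, not kubelet source.

```go
// cni_conf_check.go — illustrative sketch, not cluster code: reproduces the
// check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log entry
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	found := 0
	for _, e := range entries {
		// CNI configs are conventionally *.conf, *.conflist or *.json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found++
		}
	}
	if found == 0 {
		fmt.Println("NetworkPluginNotReady: no CNI configuration file present")
	} else {
		fmt.Printf("%d CNI configuration file(s) present\n", found)
	}
}
```

On this node the directory presumably stays empty until the crash-looping ovnkube-controller (next entry) stays up long enough to write its CNI config, which is why the same NotReady condition is recorded again at 13:16:05.501441 below.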
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.411090 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
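The lastState/terminated block in the patch above records why the node still has no CNI config: ovnkube-controller exited 1 at 13:15:47 and is now held in CrashLoopBackOff ("back-off 10s restarting failed container"). As a hedged sketch of how that delay grows, assuming the kubelet's commonly documented defaults (10s initial back-off, doubled per restart, capped at 5m); the function below is illustrative, not kubelet source:

```go
// crashloop_backoff.go — illustrative sketch of CrashLoopBackOff delay growth
// under the assumed defaults; not the kubelet's actual implementation.
package main

import (
	"fmt"
	"time"
)

// backoff returns the assumed wait before restart attempt n.
func backoff(restartCount int) time.Duration {
	d := 10 * time.Second // matches "back-off 10s" in the log
	for i := 0; i < restartCount; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute // assumed cap
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d -> wait %s\n", n, backoff(n))
	}
}
```

Consistent with this, the later copy of the same patch (13:16:05.536236 below) shows ovnkube-controller running again at 13:16:05 with restartCount 2, roughly one back-off interval after the 13:15:47 crash.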
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.431351 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.447281 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.460182 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.473571 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.483791 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501328 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501386 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501412 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501441 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.501844 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.515111 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.536236 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.551778 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.565321 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.576556 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.589465 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.601563 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.604117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.604183 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.604205 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.604249 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.604271 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.622006 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.639485 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.655659 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.671337 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.690694 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.707394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.707440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.707449 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.707464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.707474 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.708654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:
15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.722491 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.753036 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.768957 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.787159 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:05Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.797320 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.797412 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.797478 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:05 crc kubenswrapper[4747]: E1126 13:16:05.797641 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.809846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.809878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.809886 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.809899 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.809909 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.912293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.912342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.912354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.912369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:05 crc kubenswrapper[4747]: I1126 13:16:05.912381 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:05Z","lastTransitionTime":"2025-11-26T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.016304 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.016389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.016409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.016474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.016493 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.121702 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.121774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.121792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.121819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.121837 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.179734 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/2.log"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.180749 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/1.log"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.186300 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.186404 4747 scope.go:117] "RemoveContainer" containerID="f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.187161 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22" exitCode=1
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.189117 4747 scope.go:117] "RemoveContainer" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22"
Nov 26 13:16:06 crc kubenswrapper[4747]: E1126 13:16:06.190194 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.208483 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.224383 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.224450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.224522 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.224548 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.224631 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.225972 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.241282 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.263818 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.296849 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f760ede28b7c72d3d60a64db012b7e4d55d0f73200e3c481c0ca09cbc286b4f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:15:47.387242 6181 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:15:47.387257 6181 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:15:47.387285 6181 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:15:47.387288 6181 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:15:47.387314 6181 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:15:47.387592 6181 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:15:47.387609 6181 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:15:47.387662 6181 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:15:47.387682 6181 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:15:47.387704 6181 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:15:47.387741 6181 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:15:47.387753 6181 factory.go:656] Stopping watch factory\\\\nI1126 13:15:47.387767 6181 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:15:47.387752 6181 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.318445 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.328111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.328156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.328172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.328195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.328213 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.337768 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.354807 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.370468 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.386308 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.405519 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.426943 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.431552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.431641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.431658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.431682 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.431698 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.448740 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.473050 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.495344 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.520922 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.535251 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.535303 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.535325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.535354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.535375 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.541632 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.575195 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50
bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:06Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.638411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.638499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 
crc kubenswrapper[4747]: I1126 13:16:06.638558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.638583 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.638600 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.742900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.743180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.743347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.743523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.743648 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.797473 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.797633 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:06 crc kubenswrapper[4747]: E1126 13:16:06.797801 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:06 crc kubenswrapper[4747]: E1126 13:16:06.797930 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.847051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.847172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.847198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.847230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.847256 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.950395 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.950758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.950892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.951095 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:06 crc kubenswrapper[4747]: I1126 13:16:06.951258 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:06Z","lastTransitionTime":"2025-11-26T13:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.054636 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.054860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.055245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.055396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.055540 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.158516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.158563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.158582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.158606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.158624 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.193926 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/2.log" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.198860 4747 scope.go:117] "RemoveContainer" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22" Nov 26 13:16:07 crc kubenswrapper[4747]: E1126 13:16:07.199161 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.216001 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.233648 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.254738 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.263716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.263787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.263813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.263846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.263870 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.276348 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.297022 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.329019 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.346506 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.366850 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.367677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.367737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.367754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.367784 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.367801 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.381959 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.400490 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.417294 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.450156 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.470835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.470926 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.470943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.470966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.470984 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.474836 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.496027 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.516681 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.533475 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.554978 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.572826 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.574461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.574506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.574526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.574556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.574578 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.677605 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.677650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.677666 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.677690 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.677708 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.780692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.780757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.780774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.780797 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.780814 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.798386 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.798533 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:07 crc kubenswrapper[4747]: E1126 13:16:07.798722 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:07 crc kubenswrapper[4747]: E1126 13:16:07.798877 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.884159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.884218 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.884236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.884263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.884282 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.987644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.987718 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.987742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.987771 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:07 crc kubenswrapper[4747]: I1126 13:16:07.987796 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:07Z","lastTransitionTime":"2025-11-26T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.091183 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.091241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.091260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.091283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.091302 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:08Z","lastTransitionTime":"2025-11-26T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.797734 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:08 crc kubenswrapper[4747]: I1126 13:16:08.797758 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:08 crc kubenswrapper[4747]: E1126 13:16:08.797928 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:08 crc kubenswrapper[4747]: E1126 13:16:08.797981 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.017968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.018019 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.018035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.018098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.018117 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:09Z","lastTransitionTime":"2025-11-26T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
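Every record so far carries the same root symptom: /etc/kubernetes/cni/net.d/ contains no CNI configuration. On OpenShift that file is written by the cluster network plugin, not by the kubelet, so triage starts with the directory itself and the network operator. A minimal sketch, assuming shell access on the node and a working oc login; the operator namespace names are the usual OpenShift ones and are an assumption, not taken from this log:

    # Is the CNI config really missing? (path copied verbatim from the log)
    ls -l /etc/kubernetes/cni/net.d/
    # Which default network plugin should be writing it?
    oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.type}{"\n"}'
    # Are the plugin pods healthy? (assumes the default OVNKubernetes plugin plus Multus)
    oc get pods -n openshift-multus
    oc get pods -n openshift-ovn-kubernetes

An empty directory with crashing or missing plugin pods matches the NetworkPluginNotReady loop seen here.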
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.798151 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:09 crc kubenswrapper[4747]: I1126 13:16:09.798203 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:09 crc kubenswrapper[4747]: E1126 13:16:09.798362 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:09 crc kubenswrapper[4747]: E1126 13:16:09.798558 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.050588 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.050684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.050709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.050739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.050765 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.798032 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.798030 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.798244 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.798435 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.826134 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.826206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.826228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.826254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.826272 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.846835 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-26T13:16:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.852540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.852594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.852610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.852633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.852651 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.873018 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:10Z is after 
2025-08-24T17:21:41Z" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.877843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.877893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.877910 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.877937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.877954 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.898122 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:10Z is after 
2025-08-24T17:21:41Z" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.903646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.903703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.903722 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.903746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.903767 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.924196 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:10Z is after 
2025-08-24T17:21:41Z" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.928998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.929047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.929097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.929130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.929148 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.950413 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:10Z is after 
2025-08-24T17:21:41Z" Nov 26 13:16:10 crc kubenswrapper[4747]: E1126 13:16:10.950649 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.953115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.953168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.953184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.953208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:10 crc kubenswrapper[4747]: I1126 13:16:10.953225 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:10Z","lastTransitionTime":"2025-11-26T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.057381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.057458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.057480 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.057510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.057541 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.160813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.160870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.160887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.160911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.160931 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.264445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.264507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.264524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.264547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.264564 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.367092 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.367155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.367179 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.367206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.367266 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.470476 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.470549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.470571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.470598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.470616 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.572905 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.572968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.572985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.573016 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.573033 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.675485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.675545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.675566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.675591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.675614 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.778378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.778459 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.778482 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.778511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.778536 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.798279 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.798339 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:11 crc kubenswrapper[4747]: E1126 13:16:11.798461 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:11 crc kubenswrapper[4747]: E1126 13:16:11.798593 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.881375 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.881441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.881465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.881495 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.881516 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.984986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.985040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.985088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.985110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:11 crc kubenswrapper[4747]: I1126 13:16:11.985127 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:11Z","lastTransitionTime":"2025-11-26T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.087986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.088044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.088086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.088108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.088127 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:12Z","lastTransitionTime":"2025-11-26T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The identical five-entry cycle (four "Recording event message for node" entries plus one "Node became not ready" entry) repeats with cycles beginning at 13:16:12.190, 13:16:12.293, 13:16:12.397, 13:16:12.500, 13:16:12.603, and 13:16:12.705; only the sub-second offsets differ.]
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.798097 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.798097 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:12 crc kubenswrapper[4747]: E1126 13:16:12.798345 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:12 crc kubenswrapper[4747]: E1126 13:16:12.798449 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
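[Editor's note: the cycle above is the kubelet republishing the same Ready=False condition because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. Once the journald/klog prefix is stripped, the condition payload is plain JSON. Below is a minimal log-scraping sketch in stdlib Python; the entry string is copied verbatim from the cycle above, and the parsing approach is illustrative rather than anything kubelet itself does.]

import json

# One "Node became not ready" entry, verbatim from the log above.
entry = ('Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.088127 '
         '4747 setters.go:603] "Node became not ready" node="crc" '
         'condition={"type":"Ready","status":"False",'
         '"lastHeartbeatTime":"2025-11-26T13:16:12Z",'
         '"lastTransitionTime":"2025-11-26T13:16:12Z",'
         '"reason":"KubeletNotReady","message":"container runtime network '
         'not ready: NetworkReady=false reason:NetworkPluginNotReady '
         'message:Network plugin returns error: no CNI configuration file '
         'in /etc/kubernetes/cni/net.d/. Has your network provider started?"}')

# Everything after "condition=" is a well-formed JSON object.
condition = json.loads(entry.split("condition=", 1)[1])
assert condition["type"] == "Ready" and condition["status"] == "False"
print(condition["reason"])   # KubeletNotReady
print(condition["message"])  # ...no CNI configuration file in /etc/kubernetes/cni/net.d/...

[The same parse applies to every repeat of the cycle; only the two timestamp fields advance.]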
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.808481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.808552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.808575 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.808603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.808629 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:12Z","lastTransitionTime":"2025-11-26T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.911653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.911714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.911738 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.911764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:12 crc kubenswrapper[4747]: I1126 13:16:12.911784 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:12Z","lastTransitionTime":"2025-11-26T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.014628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.014737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.014761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.014793 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.014817 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.117916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.117989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.118012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.118042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.118269 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.220792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.220862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.220885 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.220914 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.220936 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.324148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.324217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.324235 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.324261 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.324282 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.427385 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.427453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.427475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.427507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.427531 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.530961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.531029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.531097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.531130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.531151 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.633253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.633302 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.633322 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.633342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.633354 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.735966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.736016 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.736027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.736044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.736074 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.799703 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:13 crc kubenswrapper[4747]: E1126 13:16:13.800127 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.800701 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:13 crc kubenswrapper[4747]: E1126 13:16:13.800843 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.812257 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.829773 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.840176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.840277 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.840330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.840355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.840371 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.842447 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.853122 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.864227 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.877407 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.900125 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.915764 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.940955 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.942908 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.942963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.942981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.943007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.943026 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:13Z","lastTransitionTime":"2025-11-26T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.960548 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.974192 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:13 crc kubenswrapper[4747]: I1126 13:16:13.988033 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.003390 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.017773 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.043986 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.045959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.046033 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.046045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.046132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.046196 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.061619 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.074101 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.087045 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:14Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.148398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.148455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.148466 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.148481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.148491 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.251147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.251219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.251239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.251263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.251286 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.354132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.354467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.354484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.354505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.354521 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.457697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.457773 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.457786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.457840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.457853 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.560559 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.560617 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.560634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.560657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.560673 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.663385 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.663455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.663473 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.663496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.663512 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.766621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.766684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.766706 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.766735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.766757 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.797929 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:14 crc kubenswrapper[4747]: E1126 13:16:14.798142 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.797926 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:14 crc kubenswrapper[4747]: E1126 13:16:14.798299 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.869451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.869542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.869555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.869581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.869595 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.996834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.996887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.996899 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.996921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:14 crc kubenswrapper[4747]: I1126 13:16:14.996939 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:14Z","lastTransitionTime":"2025-11-26T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.099562 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.099659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.099686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.099717 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.099738 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.202580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.202651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.202669 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.202693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.202711 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.306439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.306531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.306549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.306577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.306602 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.409411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.409506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.409522 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.409545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.409600 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.513031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.513132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.513150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.513175 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.513195 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.616997 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.617142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.617162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.617231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.617249 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.721020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.721099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.721116 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.721142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.721159 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.797973 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:15 crc kubenswrapper[4747]: E1126 13:16:15.798226 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.798750 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:15 crc kubenswrapper[4747]: E1126 13:16:15.799007 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.824827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.824886 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.824903 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.824927 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.824944 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.927685 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.927729 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.927788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.927809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:15 crc kubenswrapper[4747]: I1126 13:16:15.927820 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:15Z","lastTransitionTime":"2025-11-26T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.082543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.082595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.082611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.082631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.082646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.185364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.185421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.185436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.185455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.185474 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.288028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.288107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.288128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.288150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.288164 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.391572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.391618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.391634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.391659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.391676 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.494308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.494398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.494422 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.494453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.494476 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.597862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.597915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.597932 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.597954 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.597972 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.701008 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.701114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.701140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.701169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.701200 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.797287 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.797354 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:16 crc kubenswrapper[4747]: E1126 13:16:16.797489 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:16 crc kubenswrapper[4747]: E1126 13:16:16.797658 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.805421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.805488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.805508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.805693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.805724 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.909029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.909116 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.909133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.909157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:16 crc kubenswrapper[4747]: I1126 13:16:16.909176 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:16Z","lastTransitionTime":"2025-11-26T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.011710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.011753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.011768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.011789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.011804 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.115245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.115319 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.115336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.115388 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.115406 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.219443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.219525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.219545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.219597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.219616 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.323955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.324026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.324044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.324107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.324126 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.427788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.427845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.427862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.427887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.427904 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.530692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.530759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.530811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.530840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.530859 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.633661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.633733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.633757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.633784 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.633803 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.737201 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.737274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.737294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.737318 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.737336 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.797941 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.797986 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:17 crc kubenswrapper[4747]: E1126 13:16:17.798181 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:17 crc kubenswrapper[4747]: E1126 13:16:17.798405 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.840464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.840535 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.840557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.840577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.840593 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.943407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.943457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.943475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.943497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:17 crc kubenswrapper[4747]: I1126 13:16:17.943515 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:17Z","lastTransitionTime":"2025-11-26T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.046174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.046247 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.046270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.046301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.046325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.149721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.149783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.149807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.149842 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.149867 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.253236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.253273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.253284 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.253299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.253309 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.356357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.356417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.356427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.356449 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.356461 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.460381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.460440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.460465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.460492 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.460516 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.564018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.564073 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.564086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.564099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.564108 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.666574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.666600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.666608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.666620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.666628 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.769180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.769228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.769244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.769264 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.769280 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.797811 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:18 crc kubenswrapper[4747]: E1126 13:16:18.797925 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.797817 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:18 crc kubenswrapper[4747]: E1126 13:16:18.798112 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.874786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.874823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.874833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.874853 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.874862 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.978129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.978181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.978193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.978208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:18 crc kubenswrapper[4747]: I1126 13:16:18.978250 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:18Z","lastTransitionTime":"2025-11-26T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.081306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.081339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.081350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.081366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.081378 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.184640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.184693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.184709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.184731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.184751 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.287192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.287524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.287648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.287818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.287933 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.390735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.390799 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.390825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.390855 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.390877 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.493728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.493803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.493816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.493834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.493845 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.596383 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.596687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.596812 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.596889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.596976 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.700164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.700216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.700227 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.700244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.700256 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.798300 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.798449 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:19 crc kubenswrapper[4747]: E1126 13:16:19.798576 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:19 crc kubenswrapper[4747]: E1126 13:16:19.798795 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.802548 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.802648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.802718 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.802782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.802841 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.812533 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.905727 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.905781 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.905798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.905824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:19 crc kubenswrapper[4747]: I1126 13:16:19.905842 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:19Z","lastTransitionTime":"2025-11-26T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.009198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.009523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.009659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.009795 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.009958 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.113031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.114094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.114239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.114378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.114534 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.217144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.217485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.221246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.221336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.221356 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.324778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.324819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.324831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.324845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.324854 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.427214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.427256 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.427266 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.427281 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.427292 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.530236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.530297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.530316 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.530344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.530364 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.632946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.633018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.633041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.633103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.633127 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.735754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.735855 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.735874 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.735896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.735913 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.798046 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:20 crc kubenswrapper[4747]: E1126 13:16:20.798401 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.798199 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:20 crc kubenswrapper[4747]: E1126 13:16:20.798675 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.843408 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.843494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.843519 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.843557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.843593 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.946552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.946612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.946630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.946654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.946671 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.967806 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.967842 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.967851 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.967866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.967875 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:20 crc kubenswrapper[4747]: E1126 13:16:20.982359 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:20Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.986564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.986813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
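The status patch itself is being rejected for a different reason than the CNI loop: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-26. A short sketch for confirming that from the node, assuming the endpoint is reachable locally and the third-party cryptography package is installed:

import ssl
from datetime import datetime

from cryptography import x509  # third-party dependency (assumption: installed)

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint quoted in the error above

# Fetch the serving certificate without verifying it (verification is the
# failing step in the log) and report its validity window.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)
print("expired:   ", datetime.utcnow() > cert.not_valid_after)

If the reported window matches the error, rotating the network-node-identity serving certificate is the actual fix, and the unreachable-network messages above are likely a related symptom of stale certificates rather than an independent fault.
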
event="NodeHasNoDiskPressure" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.986953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.987112 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:20 crc kubenswrapper[4747]: I1126 13:16:20.987237 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:20Z","lastTransitionTime":"2025-11-26T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.001421 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:20Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.004813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.004883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
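The same five status events repeat here roughly every hundred milliseconds, which buries the rarer and more informative entries (the pod sync errors and the webhook rejections). A small sketch for collapsing such a stream into counts, assuming journal lines arrive on stdin; extracting the first quoted string after the source location is a klog layout assumption:

import re
import sys
from collections import Counter

# klog-style entries quote the message right after "file.go:line]", e.g.
#   ... kubelet_node_status.go:724] "Recording event message for node" ...
MESSAGE = re.compile(r'\] "([^"]+)"')

counts = Counter()
for line in sys.stdin:
    match = MESSAGE.search(line)
    if match:
        counts[match.group(1)] += 1

for message, n in counts.most_common(10):
    print(f"{n:6d}  {message}")

Fed this journal, the node-status messages would dominate the head of the list, leaving the webhook and CNI errors visible as low-count outliers.
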
event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.004902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.004925 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.004942 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.018853 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:21Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.023023 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.023121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.023181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.023297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.023384 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.038630 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:21Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.041537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.041567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.041576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.041590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.041599 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.059338 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:21Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.059553 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.060736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.060798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.060815 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.060836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.060852 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.163611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.163668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.163687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.163712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.163729 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.266191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.266385 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.266445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.266507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.266570 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.339940 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.340385 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.340596 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:16:53.340568124 +0000 UTC m=+100.326879219 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.369808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.369844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.369853 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.369868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.369878 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.472524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.472571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.472586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.472604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.472616 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.575007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.575075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.575084 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.575098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.575119 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.677294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.677355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.677380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.677410 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.677433 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.780009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.780094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.780112 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.780140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.780163 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.798447 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.798552 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.798701 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:21 crc kubenswrapper[4747]: E1126 13:16:21.798748 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.882389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.882431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.882440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.882460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.882473 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.984511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.984542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.984551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.984563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:21 crc kubenswrapper[4747]: I1126 13:16:21.984573 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:21Z","lastTransitionTime":"2025-11-26T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.086897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.087033 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.087104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.087141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.087202 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.190689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.190767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.190786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.190814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.190832 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.292764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.292800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.292809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.292824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.292834 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.395998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.396047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.396248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.396268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.396279 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.499104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.499149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.499161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.499181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.499195 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.601919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.602018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.602032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.602078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.602096 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.703911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.703988 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.704006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.704038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.704104 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.797947 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.798040 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:22 crc kubenswrapper[4747]: E1126 13:16:22.798167 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:16:22 crc kubenswrapper[4747]: E1126 13:16:22.798281 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.799046 4747 scope.go:117] "RemoveContainer" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22"
Nov 26 13:16:22 crc kubenswrapper[4747]: E1126 13:16:22.799237 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.806933 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.806964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.806973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.806987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.806996 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.910075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.910126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.910138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.910156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:22 crc kubenswrapper[4747]: I1126 13:16:22.910177 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:22Z","lastTransitionTime":"2025-11-26T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.011668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.011805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.011818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.011835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.011846 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.114515 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.114687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.114704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.114724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.114741 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.217448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.217499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.217513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.217531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.217544 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.257243 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/0.log" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.257324 4747 generic.go:334] "Generic (PLEG): container finished" podID="aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1" containerID="eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d" exitCode=1 Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.257374 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerDied","Data":"eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.258016 4747 scope.go:117] "RemoveContainer" containerID="eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.281663 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.295578 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.312125 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.319114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.319137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.319145 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.319157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.319166 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.339580 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.376419 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.399257 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 
13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.420833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.420891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.420908 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.420933 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.420950 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.424330 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.440526 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.453614 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.466947 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.477672 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.491421 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.513537 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.523571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.523604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.523615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.523630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.523642 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.529484 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.546997 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.559520 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.573903 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.586890 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.604980 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.625871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.625901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.625909 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.625922 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.625931 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.729133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.729189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.729199 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.729213 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.729221 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.797336 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.797408 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:23 crc kubenswrapper[4747]: E1126 13:16:23.797551 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:23 crc kubenswrapper[4747]: E1126 13:16:23.797670 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.825174 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.832350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.832396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.832407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.832427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.832485 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.844201 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.862185 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.875174 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.889136 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.922745 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.935917 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.935980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.936001 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.936024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.936041 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:23Z","lastTransitionTime":"2025-11-26T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.939871 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.951871 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
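The kube-apiserver-crc payload just above is worth a closer look: its kube-apiserver-check-endpoints container shows restartCount 1 and a populated lastState.terminated, whose message field preserves the tail of the crashed container's own log (exit code 255, the "TLS handshake timeout" against localhost:6443, and the final fatal "pods \"kube-apiserver-crc\" not found"). This is the same text kubectl describe surfaces under "Last State". A sketch, under the assumption of a reachable kubeconfig (the path below is an assumption, not from the log), for pulling those fields with client-go:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative; use whatever the node actually has.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        pod, err := client.CoreV1().Pods("openshift-kube-apiserver").
            Get(context.Background(), "kube-apiserver-crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the preserved termination info for any restarted container.
        for _, cs := range pod.Status.ContainerStatuses {
            if t := cs.LastTerminationState.Terminated; t != nil {
                fmt.Printf("%s: exit=%d reason=%s\nlast log tail:\n%s\n",
                    cs.Name, t.ExitCode, t.Reason, t.Message)
            }
        }
    }

The log continues with the next entry's payload: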
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.965516 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.976404 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:23 crc kubenswrapper[4747]: I1126 13:16:23.995444 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
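The kube-multus termination above is a consequence rather than a cause. The node reports NetworkReady=false because no CNI configuration exists in /etc/kubernetes/cni/net.d/; multus, in turn, waits for the default network's config file, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once it is healthy, and exits when its readiness poll times out. Since ovn-kubernetes itself depends on the network-node-identity webhook that is rejecting all writes, the whole chain is wedged on the expired certificate. A sketch of the readiness-indicator wait as the log describes it (the file path is from the multus error message; the one-second interval and 45-second timeout are assumptions for illustration, not multus's actual configured values):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls until the default network's CNI config appears,
    // or gives up, mirroring the readiness-indicator check in the log.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // file exists: default network is ready
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        if err != nil {
            fmt.Println(err) // matches "timed out waiting for the condition"
        }
    }

The next rejected patch follows: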
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.005795 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 
13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.016670 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.025526 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.037487 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.038992 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.039020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.039028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.039043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.039067 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.050703 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.058717 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759
cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.073674 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.094727 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.141201 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.141242 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.141252 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.141268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.141277 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.243390 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.243434 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.243447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.243467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.243478 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.263033 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/0.log" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.263102 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerStarted","Data":"a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.279521 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.291179 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.309145 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.325904 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.338426 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.345325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.345353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.345363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.345377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.345388 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.354223 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.371863 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.382876 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.398010 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.416088 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.426657 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.437645 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.448454 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.448509 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.448520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.448536 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.448547 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.450122 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.469669 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.484887 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.501968 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.519166 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.536495 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.551124 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.551165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.551174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.551189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.551200 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container 
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.552582 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-26T13:16:24Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.653556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.653614 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.653627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.653648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.653665 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.757041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.757880 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.758035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.758210 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.758378 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.797749 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:24 crc kubenswrapper[4747]: E1126 13:16:24.797885 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.797763 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:24 crc kubenswrapper[4747]: E1126 13:16:24.798128 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.862443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.862513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.862532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.862557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.862576 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.965117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.965191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.965215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.965294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:24 crc kubenswrapper[4747]: I1126 13:16:24.965325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:24Z","lastTransitionTime":"2025-11-26T13:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.068824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.068906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.068930 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.068960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.068977 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.172000 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.172113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.172132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.172163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.172185 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.274531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.274599 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.274611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.274631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.274666 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.377875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.377917 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.377929 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.377947 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.377959 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.481470 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.481517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.481525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.481545 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.481556 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.584002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.584096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.584110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.584131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.584142 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.686716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.686783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.686800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.686828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.686845 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.790626 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.790686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.790706 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.790745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.790763 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.797341 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.797445 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:25 crc kubenswrapper[4747]: E1126 13:16:25.797516 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:25 crc kubenswrapper[4747]: E1126 13:16:25.797675 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.894226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.894289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.894299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.894325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.894336 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.998955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.999003 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.999020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.999042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:25 crc kubenswrapper[4747]: I1126 13:16:25.999083 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:25Z","lastTransitionTime":"2025-11-26T13:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.101462 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.101532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.101542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.101566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.101581 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.204185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.204278 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.204291 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.204311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.204325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.306706 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.306768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.306804 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.306834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.306855 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.409716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.409766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.409783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.409808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.409827 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.512282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.512339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.512360 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.512389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.512409 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.615150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.615238 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.615258 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.615330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.615354 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.717943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.718006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.718030 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.718093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.718119 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.798159 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.798192 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:26 crc kubenswrapper[4747]: E1126 13:16:26.798408 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:16:26 crc kubenswrapper[4747]: E1126 13:16:26.798562 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.821029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.821083 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.821094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.821110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.821123 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.924900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.924961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.924985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.925012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:26 crc kubenswrapper[4747]: I1126 13:16:26.925032 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:26Z","lastTransitionTime":"2025-11-26T13:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.027391 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.027465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.027485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.027514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.027539 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.130566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.130618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.130635 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.130660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.130677 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.234050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.234153 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.234170 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.234194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.234211 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.336717 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.336762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.336773 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.336792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.336805 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.438889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.438930 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.438940 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.438977 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.438988 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.541755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.541817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.541834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.541860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.541878 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.645291 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.645389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.645415 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.645452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.645476 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.748790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.748877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.748891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.748919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.748941 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.797550 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.797672 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:27 crc kubenswrapper[4747]: E1126 13:16:27.797761 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:27 crc kubenswrapper[4747]: E1126 13:16:27.797878 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.851629 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.851690 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.851707 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.851730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.851750 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.954867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.954917 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.954934 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.954959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:27 crc kubenswrapper[4747]: I1126 13:16:27.954975 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:27Z","lastTransitionTime":"2025-11-26T13:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.058340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.058408 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.058427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.058452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.058469 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.161957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.162086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.162113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.162157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.162181 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.266406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.266464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.266483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.266509 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.266527 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.370460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.370574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.370637 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.370671 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.370725 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.474326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.474386 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.474410 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.474440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.474462 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.577684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.577753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.577772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.577794 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.577811 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.680467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.680511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.680528 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.680551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.680567 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.783966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.784036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.784045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.784089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.784099 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.797635 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.797685 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:28 crc kubenswrapper[4747]: E1126 13:16:28.797831 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:28 crc kubenswrapper[4747]: E1126 13:16:28.798017 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.886734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.886806 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.886828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.886868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.886892 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.990720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.990780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.990794 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.990816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:28 crc kubenswrapper[4747]: I1126 13:16:28.990829 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:28Z","lastTransitionTime":"2025-11-26T13:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.095107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.095171 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.095191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.095217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.095235 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.198006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.198101 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.198121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.198148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.198165 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.301576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.301659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.301674 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.301694 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.301706 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.405882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.405948 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.405965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.405989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.406122 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.510332 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.510399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.510420 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.510444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.510461 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.613723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.613786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.613823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.613848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.613865 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.717638 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.717693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.717710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.717785 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.717827 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.797712 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.797846 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:29 crc kubenswrapper[4747]: E1126 13:16:29.798021 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:29 crc kubenswrapper[4747]: E1126 13:16:29.798465 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.820759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.820806 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.820825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.820846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.820862 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.923939 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.923998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.924020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.924044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:29 crc kubenswrapper[4747]: I1126 13:16:29.924100 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:29Z","lastTransitionTime":"2025-11-26T13:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.026905 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.026956 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.026972 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.026994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.027011 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.130426 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.130707 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.130834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.130924 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.131010 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.235163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.235219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.235241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.235269 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.235289 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.338268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.338366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.338425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.338451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.338506 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.441718 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.441768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.441785 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.441807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.441824 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.544664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.544738 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.544758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.544782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.544799 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.647565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.647703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.647733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.647766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.647793 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.750995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.751048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.751093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.751115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.751134 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.798028 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.798028 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:30 crc kubenswrapper[4747]: E1126 13:16:30.798420 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:30 crc kubenswrapper[4747]: E1126 13:16:30.798252 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.854733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.855006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.855330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.855615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.855822 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.958610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.958661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.958679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.958704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:30 crc kubenswrapper[4747]: I1126 13:16:30.958723 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:30Z","lastTransitionTime":"2025-11-26T13:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.061989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.062049 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.062103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.062125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.062142 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.164697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.164753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.164775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.164802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.164825 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.267359 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.267409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.267427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.267450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.267467 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.328731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.329136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.329349 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.329580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.329770 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.351134 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:31Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.357007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.357273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.357415 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.357564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.357699 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.377110 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.383150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.383199 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.383215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.383243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.383262 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.403805 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.409589 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.409651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
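Every retry above dies at the same point: the API server accepts the PATCH, but the admission webhook node.network-node-identity.openshift.io cannot be called because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-26T13:16:31Z. One way to confirm this from a shell on the node, assuming openssl is installed there (a sketch, not the only method):

    # Dump the webhook endpoint's serving-certificate validity window.
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -subject -enddate

The notAfter date should match the 2025-08-24T17:21:41Z cutoff quoted in the error. On CRC this pattern usually means the bundle's embedded certificates lapsed while the VM was stopped; the cluster is expected to rotate them after startup, which can take several minutes.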
event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.409673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.409701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.409721 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.429386 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.434592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.434641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.434657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.434679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.434696 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.453852 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.454111 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.456760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
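The "exceeds retry count" line means the kubelet has spent its whole per-sync retry budget (a hard-coded constant in upstream kubelet, nodeStatusUpdateRetry = 5) against the same webhook failure; it tries again on the next status-update interval, which is why the identical block keeps reappearing. To see which conditions actually landed on the Node object, something like the following works from any machine with a working kubeconfig (a sketch; adjust the node name as needed):

    # Print type/status/reason for every condition on node "crc".
    oc get node crc -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'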
event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.456818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.456838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.456862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.456879 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.559593 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.559645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.559668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.559702 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.559740 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.663113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.663181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.663203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.663233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.663256 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.766159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.766288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.766314 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.766344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.766363 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.797867 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.797920 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.798104 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:31 crc kubenswrapper[4747]: E1126 13:16:31.798223 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
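The sandbox errors are downstream of the same outage rather than a second fault: with NetworkReady=false the kubelet will not create sandboxes for pods that need cluster networking, and the CNI config only appears in /etc/kubernetes/cni/net.d/ once the network provider (OVN-Kubernetes on CRC) is up; host-network pods are exempt, which is how the provider itself is expected to start. A quick check from a shell on the node (the container-name filter is an assumption and may differ):

    # Is a CNI config present yet, and is the network provider's container running?
    ls -l /etc/kubernetes/cni/net.d/
    sudo crictl ps -a --name ovnkube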
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869602 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869619 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.972897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.972961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.972985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.973014 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.973037 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.075915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.075983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.075999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.076026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.076047 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.178940 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.179003 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.179020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.179046 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.179093 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.282670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.283207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.283364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.283513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.283676 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.386484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.386532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.386550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.386573 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.386590 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.489902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.489961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.489978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.490002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.490019 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.592757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.592871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.592889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.592912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.592929 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.696104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.696153 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.696203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.696226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.696243 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.797328 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:32 crc kubenswrapper[4747]: E1126 13:16:32.797557 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.797781 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:32 crc kubenswrapper[4747]: E1126 13:16:32.797921 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
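The setters.go:603 entries above carry the node's Ready condition as a literal JSON object after condition=, so the reason and message can be extracted mechanically instead of being read out of the run-on text. A minimal Python 3 sketch of that extraction; the sample line is copied verbatim from the entries above, and json.JSONDecoder().raw_decode is used because some entries have trailing fields after the JSON object:

import json

# One "Node became not ready" entry, copied from the journal above
# (only the fractional-second offset varies between repetitions).
line = 'Nov 26 13:16:31 crc kubenswrapper[4747]: I1126 13:16:31.869619 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:31Z","lastTransitionTime":"2025-11-26T13:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}'

marker = "condition="
payload = line[line.index(marker) + len(marker):]
# raw_decode parses the first complete JSON value and ignores any trailing
# text, so the same call works for entries with fields after the object.
condition, _ = json.JSONDecoder().raw_decode(payload)
print(condition["reason"])   # -> KubeletNotReady
print(condition["message"])  # -> container runtime network not ready: ...

On every one of the repeated entries this prints reason KubeletNotReady and the same no-CNI-configuration message, which is the single fact the whole cycle is reporting.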
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.799149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.799253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.799280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.799309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.799333 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.902937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.902998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.903016 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.903041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:32 crc kubenswrapper[4747]: I1126 13:16:32.903093 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:32Z","lastTransitionTime":"2025-11-26T13:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.005767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.005814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.005828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.005847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.005861 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.108896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.108995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.109044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.109123 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.109149 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.212161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.212225 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.212244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.212268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.212286 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
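The same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the Ready-condition update) repeats roughly every 100 ms for as long as the CNI configuration is missing. When triaging a capture like this, collapsing entries that differ only in their timestamps makes the handful of unique failure signatures visible. A sketch of that collapse, assuming Python 3 and the journal saved to a local file; the kubelet.log name is a placeholder, not something from the log:

import re
import sys
from collections import Counter

# Strip the journald prefix plus the klog header ("I1126 13:16:31.869507 4747 ")
# so that entries differing only in their timestamps compare equal.
HEADER = re.compile(r'^.*kubenswrapper\[\d+\]: [IEWF]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ')
# Also mask RFC 3339 stamps embedded in the message bodies.
STAMP = re.compile(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z')

counts = Counter()
path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
with open(path, encoding="utf-8", errors="replace") as fh:
    for raw in fh:
        counts[STAMP.sub("<ts>", HEADER.sub("", raw.strip()))] += 1

# The few distinct messages, most frequent first.
for message, n in counts.most_common(10):
    print(f"{n:5d}  {message[:120]}")

Run against this excerpt it reduces the hundreds of node-status entries to the five recurring messages plus the per-pod sync errors; entries that the capture wrapped across physical lines will show up as extra low-count signatures, which is an acceptable loss for a first pass.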
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.315565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.315635 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.315653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.315677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.315694 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.419009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.419091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.419108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.419131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.419148 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.522260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.522333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.522357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.522389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.522411 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.626156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.626219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.626236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.626260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.626280 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.729613 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.729669 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.729695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.729721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.729738 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.797828 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:33 crc kubenswrapper[4747]: E1126 13:16:33.798089 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.798261 4747 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:33 crc kubenswrapper[4747]: E1126 13:16:33.798715 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.822574 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.833519 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.833579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.833597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.833621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.833638 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.845665 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.865170 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.890509 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.907458 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.938294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.938382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.938400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.938460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.938480 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:33Z","lastTransitionTime":"2025-11-26T13:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.943657 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.964719 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:33 crc kubenswrapper[4747]: I1126 13:16:33.984442 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.004003 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.025149 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.042104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.042248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.042271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.042296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.042314 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.048436 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.066294 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.084881 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.106115 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.124576 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.145576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.149148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.149203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.149225 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.149254 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.149934 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.167183 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759
cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.183189 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.205624 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f8
4e5b08b33a664d63b9c7ec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:34Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.251996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.252135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.252165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.252193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.252218 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.357915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.358004 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.358030 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.358096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.358129 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.462480 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.462552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.462570 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.462600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.462620 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.565621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.565707 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.565726 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.565752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.565770 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.668490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.668553 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.668575 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.668607 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.668632 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.770774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.770829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.770846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.770865 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.770878 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.797952 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.797990 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:34 crc kubenswrapper[4747]: E1126 13:16:34.798118 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:34 crc kubenswrapper[4747]: E1126 13:16:34.798251 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.873580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.873654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.873671 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.873692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.873708 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.977619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.977693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.977715 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.977744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:34 crc kubenswrapper[4747]: I1126 13:16:34.977761 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:34Z","lastTransitionTime":"2025-11-26T13:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.080651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.080691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.080702 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.080721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.080732 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.183928 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.183991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.184007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.184098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.184118 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.287180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.287275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.287293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.287316 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.287335 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.389667 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.389744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.389767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.389796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.389821 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.492620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.492729 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.492754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.492785 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.492806 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.596445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.596511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.596529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.596555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.596574 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.699732 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.699802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.699825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.699854 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.699878 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.797308 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.797335 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:35 crc kubenswrapper[4747]: E1126 13:16:35.797488 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:35 crc kubenswrapper[4747]: E1126 13:16:35.797584 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.803689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.803746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.803764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.803786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.803804 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.906471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.906547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.906570 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.906599 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:35 crc kubenswrapper[4747]: I1126 13:16:35.906621 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:35Z","lastTransitionTime":"2025-11-26T13:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.009969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.010021 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.010039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.010096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.010127 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.113586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.113649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.113666 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.113691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.113710 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.216645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.216711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.216730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.216755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.216773 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.319038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.319100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.319111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.319134 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.319146 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.421864 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.421926 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.421942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.421970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.421989 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.525630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.525690 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.525708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.525731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.525748 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.628746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.628794 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.628810 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.628832 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.628849 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.731969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.732027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.732045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.732113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.732134 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.798262 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.798370 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:36 crc kubenswrapper[4747]: E1126 13:16:36.798454 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:36 crc kubenswrapper[4747]: E1126 13:16:36.798668 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.799741 4747 scope.go:117] "RemoveContainer" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.836627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.836677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.836693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.836715 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.836732 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.939653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.939721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.939740 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.939768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:36 crc kubenswrapper[4747]: I1126 13:16:36.939786 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:36Z","lastTransitionTime":"2025-11-26T13:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.042911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.042972 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.042985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.043025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.043039 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.146564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.146620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.146636 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.146660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.146679 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.249641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.249705 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.249722 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.249747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.249772 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.314923 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/2.log" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.318006 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.332299 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.352211 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.352255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.352270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.352293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.352308 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.356188 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.380965 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e
910bc1c665566a6d58e1ee75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.394974 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.407341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.418935 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.419105 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.419162 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419287 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.41924313 +0000 UTC m=+148.405554195 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419317 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419347 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419363 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419415 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.419400734 +0000 UTC m=+148.405711749 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.419546 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419674 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.419753 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419786 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.419758943 +0000 UTC m=+148.406069978 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.419932 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.420044 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.420017429 +0000 UTC m=+148.406328484 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.420119 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.420150 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.420174 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.420232 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.420216334 +0000 UTC m=+148.406527549 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.425379 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.439184 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.449764 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.455527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.455587 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.455601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.455624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.455646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.468387 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.481968 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.496848 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.508654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 
13:16:37.528804 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.544114 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.558474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.558602 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc 
kubenswrapper[4747]: I1126 13:16:37.558663 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.558745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.558825 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.561937 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:1
5:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.580829 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.600836 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.618215 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.632311 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.661672 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.661721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.661741 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.661766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.661783 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.764307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.764356 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.764376 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.764399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.764420 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.798267 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.798659 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.798879 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:37 crc kubenswrapper[4747]: E1126 13:16:37.799207 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.866937 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.867312 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.867506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.867714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.867907 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.970421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.970481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.970497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.970524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:37 crc kubenswrapper[4747]: I1126 13:16:37.970542 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:37Z","lastTransitionTime":"2025-11-26T13:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.073851 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.073906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.073929 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.073958 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.073979 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.177511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.177574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.177591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.177659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.177680 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.280739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.280803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.280820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.280847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.280867 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.324903 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/3.log" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.326180 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/2.log" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.330831 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" exitCode=1 Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.330887 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.330937 4747 scope.go:117] "RemoveContainer" containerID="a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.334265 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:16:38 crc kubenswrapper[4747]: E1126 13:16:38.334774 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.366286 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454c
f1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.383999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.384038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.384071 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.384089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.384102 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.390296 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.411974 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.429740 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.449826 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.467764 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.482287 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 
13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.486787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.486845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.486862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.486888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.486906 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.499341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.513679 4747 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.533162 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.550621 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.565612 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.582944 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.590279 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.590331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.590342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.590364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.590375 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.616408 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11c7053c826b4a81d7d57db326d5a5cdb7fd3f84e5b08b33a664d63b9c7ec22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"message\\\":\\\"openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc004647f7b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.4.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1126 13:16:05.866396 6398 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:37Z\\\",\\\"message\\\":\\\"troller\\\\nI1126 13:16:37.787889 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1126 13:16:37.787811 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1126 13:16:37.787955 6812 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1126 13:16:37.788107 6812 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:16:37.788144 6812 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:16:37.788168 6812 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:16:37.788240 6812 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:16:37.788257 6812 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:16:37.788278 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:16:37.788287 6812 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:16:37.788317 6812 factory.go:656] Stopping watch factory\\\\nI1126 13:16:37.788322 6812 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:16:37.788334 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:16:37.788336 6812 ovnkube.go:599] Stopped ovnkube\\\\nI1126 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.634785 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.649454 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.663228 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.676334 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.693927 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.693990 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.694009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.694036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.694091 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.699834 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:38Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797404 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797457 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:38 crc kubenswrapper[4747]: E1126 13:16:38.797618 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797705 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.797765 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:38 crc kubenswrapper[4747]: E1126 13:16:38.797826 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.900779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.900841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.900858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.900883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:38 crc kubenswrapper[4747]: I1126 13:16:38.900901 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:38Z","lastTransitionTime":"2025-11-26T13:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.003289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.003325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.003336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.003351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.003360 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.010813 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.106436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.106508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.106532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.106634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.106665 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.209563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.209631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.209649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.209678 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.209697 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.313787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.313849 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.313868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.313893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.313915 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.337771 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/3.log" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.343739 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:16:39 crc kubenswrapper[4747]: E1126 13:16:39.343989 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.361630 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.382830 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.415504 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e
910bc1c665566a6d58e1ee75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:37Z\\\",\\\"message\\\":\\\"troller\\\\nI1126 13:16:37.787889 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1126 13:16:37.787811 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1126 13:16:37.787955 6812 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1126 13:16:37.788107 6812 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:16:37.788144 6812 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:16:37.788168 6812 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:16:37.788240 6812 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:16:37.788257 6812 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:16:37.788278 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:16:37.788287 6812 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:16:37.788317 6812 factory.go:656] Stopping watch factory\\\\nI1126 13:16:37.788322 6812 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:16:37.788334 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:16:37.788336 6812 ovnkube.go:599] Stopped ovnkube\\\\nI1126 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.418200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.418294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.418317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.418341 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.418396 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.436450 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.457202 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.477227 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.499015 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.517854 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.521662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.521726 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.521744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.521770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.521787 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.541716 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.564704 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.586476 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.609302 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.625396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.625497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.625516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.625581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.625601 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.632166 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.656542 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.677166 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.714270 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.728579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.728642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.728660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.728684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.728702 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.737949 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.757828 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.775353 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:39Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.797869 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.797979 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:39 crc kubenswrapper[4747]: E1126 13:16:39.798157 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:39 crc kubenswrapper[4747]: E1126 13:16:39.798234 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.831045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.831443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.831615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.831803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.831989 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.935687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.936166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.936190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.936221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:39 crc kubenswrapper[4747]: I1126 13:16:39.936241 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:39Z","lastTransitionTime":"2025-11-26T13:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.061644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.061711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.061729 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.061754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.061779 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.164112 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.164174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.164194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.164217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.164236 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.266974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.267028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.267050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.267113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.267140 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.370317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.370434 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.370458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.370488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.370513 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.473977 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.474087 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.474104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.474123 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.474135 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.577723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.577802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.577820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.577845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.577862 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.680971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.681086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.681107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.681131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.681148 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.783859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.783934 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.783963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.783996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.784019 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.798173 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.798184 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:40 crc kubenswrapper[4747]: E1126 13:16:40.798342 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:40 crc kubenswrapper[4747]: E1126 13:16:40.798520 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.887218 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.887296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.887314 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.887341 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.887361 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.990106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.990164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.990185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.990211 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:40 crc kubenswrapper[4747]: I1126 13:16:40.990229 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:40Z","lastTransitionTime":"2025-11-26T13:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.092713 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.092787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.092805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.092831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.092849 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.195531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.195601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.195619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.195642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.195659 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.298113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.298173 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.298191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.298215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.298233 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.401023 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.401129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.401150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.401178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.401198 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.505358 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.505428 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.505444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.505470 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.505489 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.544036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.544132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.544149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.544177 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.544196 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.564819 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.569969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.570107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.570132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.570162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.570184 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.589910 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.595457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.595569 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.595594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.595624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.595644 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.615705 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.621001 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.621099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.621113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.621142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.621159 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.640098 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.645371 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.645440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.645458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.645483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.645502 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.665610 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"43405111-f666-4269-b245-6c0668a7ae21\\\",\\\"systemUUID\\\":\\\"06628e42-f6c2-406a-9cb1-13512d1e2a59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:41Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.665836 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.668283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.668333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.668351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.668377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.668396 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.771508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.771590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.771604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.771631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.771647 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.797885 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.798044 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.798179 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:41 crc kubenswrapper[4747]: E1126 13:16:41.798594 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.874439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.874510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.874529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.874556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.874574 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.978425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.978495 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.978514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.978549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:41 crc kubenswrapper[4747]: I1126 13:16:41.978568 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:41Z","lastTransitionTime":"2025-11-26T13:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.081223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.081286 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.081303 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.081330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.081351 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:42Z","lastTransitionTime":"2025-11-26T13:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.797783 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:42 crc kubenswrapper[4747]: I1126 13:16:42.797856 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:42 crc kubenswrapper[4747]: E1126 13:16:42.798004 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:42 crc kubenswrapper[4747]: E1126 13:16:42.798518 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
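The repetition above is one failure reported three ways: the CNI plugin has not yet written a config file, so the runtime stays NetworkReady=false, the node condition stays NotReady, and every pod that needs a pod network fails to sync. The readiness test behind the message is simply a scan of the conf directory for CNI config files. Below is a minimal, self-contained sketch of that check in Go; the directory path and message text are taken from the log above, while the polling loop and file-extension filter are illustrative assumptions, not kubelet source.

// cnicheck.go - minimal sketch of the readiness test behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
// The conf-dir path comes from the log above; the logic is
// illustrative, not the kubelet's actual implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

const cniConfDir = "/etc/kubernetes/cni/net.d" // path reported in the log

// findCNIConfig lists the files a CNI-aware runtime would load:
// *.conf, *.conflist and *.json entries in the conf directory.
func findCNIConfig(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	for {
		confs, err := findCNIConfig(cniConfDir)
		if err == nil && len(confs) > 0 {
			fmt.Println("NetworkReady=true, found:", confs)
			return
		}
		// The same condition the kubelet keeps re-reporting above.
		fmt.Println("NetworkReady=false: no CNI configuration file in", cniConfDir)
		time.Sleep(time.Second)
	}
}

On this node the directory stays empty because ovnkube-controller, the component that writes the OVN-Kubernetes CNI config, is itself crash-looping, as its container status later in the log shows.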
Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.797761 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.797788 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:43 crc kubenswrapper[4747]: E1126 13:16:43.797970 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:43 crc kubenswrapper[4747]: E1126 13:16:43.798191 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.822736 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08e5ef65-2ccb-4be1-a6eb-26b5031353e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9044b8cf423e7a775959b56cd8b0824a24453a3cc3d170e0299849b778817cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ed7be06d94549f83eca59ef1d2aabbaf74bdadc81c977462cc96f319c66330\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac
0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97ddd380bf7ff69f1359ff289a97cb23107c9321f8e1625a9b3a08fe264e670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.842517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.842563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.842579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.842608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.842626 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:43Z","lastTransitionTime":"2025-11-26T13:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.844662 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fca186-f8ef-442a-8713-1d19a1bdc8f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b8dec14f4b0482396f3663fe367900943cf2fbf948b89b4529db69f7bd8f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6711ffa76534b95bab7be80ac761bb0834cfc58daa6596775ba946ed81f91f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a749b37e5f3b897c62a211c37abc2096da02d4133abda0d92a060eec0e38740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e5918a15585325ceb5d35756041c5cbe0d58f86d83d1e7399f232e17fa591b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.862571 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a18a65ca-3552-42bc-84e4-e89c5c35bc1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce4876cd230438c597828edab632e809c77ee13d7e9bb226953e86c84043555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759
cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f61384296c02eb9387cc0759cdc923ce969e996e8a9104d3d57cf2cdbf4f509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.885505 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.918851 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59482207-ba7e-4b71-a40b-968d8e3dcb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e
910bc1c665566a6d58e1ee75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:37Z\\\",\\\"message\\\":\\\"troller\\\\nI1126 13:16:37.787889 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1126 13:16:37.787811 6812 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1126 13:16:37.787955 6812 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI1126 13:16:37.788107 6812 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:16:37.788144 6812 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:16:37.788168 6812 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 13:16:37.788240 6812 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:16:37.788257 6812 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:16:37.788278 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:16:37.788287 6812 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:16:37.788317 6812 factory.go:656] Stopping watch factory\\\\nI1126 13:16:37.788322 6812 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:16:37.788334 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:16:37.788336 6812 ovnkube.go:599] Stopped ovnkube\\\\nI1126 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:16:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2cm5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4wml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.940165 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.945025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.945207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.945230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.945256 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.945277 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:43Z","lastTransitionTime":"2025-11-26T13:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
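Has your network provider started?"}

Separately from the CNI wait, every "Failed to update status for pod" entry above fails identically: the kubelet's status PATCH is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-26. The verdict comes from Go's standard x509 validity comparison. The sketch below reproduces it against the same endpoint; the address is taken from the log, and everything else (the manual NotAfter comparison, InsecureSkipVerify so the handshake completes and the expired leaf can be inspected) is an illustrative assumption.

// certcheck.go - fetch the webhook's serving certificate and compare
// its validity window against the local clock, mirroring the
// "x509: certificate has expired or is not yet valid" errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log

	// Skip chain verification so the handshake succeeds even with an
	// expired certificate; the time check is done manually below.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("subject: %s\nnotBefore: %s\nnotAfter: %s\n",
		leaf.Subject, leaf.NotBefore.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))

	// The same comparison that produced the log error:
	// "current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z".
	switch {
	case now.After(leaf.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(leaf.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Nothing in these entries suggests a network fault on the PATCH path itself; the request is rejected solely because the webhook's certificate is out of date relative to the node clock.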
Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.959769 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.978650 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021e3b3-27be-4500-8dae-e5cd31ba8405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6494c71e35d11ebb3d03f34413115f3b478791b1b51415f2ff912f8fa00bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjc55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:43 crc kubenswrapper[4747]: I1126 13:16:43.994604 4747 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-p296l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f152815-d3e9-4250-9427-94f851c10579\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b59ab63220b325e6286ab90b946d4de2ba1ffc85d0796e9fcb96fdd919077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npngj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p296l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.009861 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6mph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f5069d-8915-40b7-b10d-59ed2d50516c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bbb0a436dbc481e27dd49e4b2fdcfad880e831f5759977d683c13bd899d862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttft9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6mph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.034460 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75p22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"405692d3-ec7c-4ebe-8d8f-d89f0de8a62a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1bb20a07e1f2d5bc499c89190ca15a882b5b6aa87595b0e9009f837cb2a958c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b915baa369ca77dd8beac323bb2e2abe9a69a2ffeea6a1984355945413f649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fcbf65faafbfc04b2fd0e77c1c3d4a3f87fd544ba79abb8e6ff45ff6ab4104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dda5e0e7f0c3dbb993732a561a929318707ddc902436443a36bfabdecb778b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae971ea93ca37a4f032d2f1801ce0f002842d7fe13aa81c05ac16a80c5a18bd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f4ca26052f9515252f5e0699ccf81772b3ba7a600685a4d931f8706453bd9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83caedccad1b9f2f50589419aa0c2f7f96a047fc99959ed3b8a09bee029a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj4z5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75p22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.048255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.048311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:44 crc 
kubenswrapper[4747]: I1126 13:16:44.048328 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.048354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.048373 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:44Z","lastTransitionTime":"2025-11-26T13:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.055168 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e5faae-7d45-4ac5-8dfc-b881dfb4c9ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d4cc674bc992e174c02be2c3ee001c1366283d033567007fb70e051da88b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7d7600ab2cee7f7ac52c301addbf248a2e6975f0bf356286c2a3ed3fb13b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:1
5:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxtwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.088470 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c5b07eb-0d42-4b5f-9b10-69958bccfa1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf10d8fbe0884b0619bea520c213f85f59d1a7329d3b7d0e155e591bacbf9c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e851d50bc1c11d4147033562c012c82268d4083d23518a17a2898c08edba6da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925d988871a6ec39571ef1b3aaad91fd370d02b4aa933f7fee24fdecd1ed3aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a63aa8456d368eedeaf3c9049fc3dfe1db454cf1b8db4826716c87f97da6b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336463b9e7836a38baaf03d0d1aa8566f8633fc1443338aa572b0c8ba941e24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//d663308090d636f0069818bf94cdb5296883fe5379ff5ccec8e99d5ebb5b17a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afbb3850b699f5af2d9a9de7ae7541cc8906ab039f5395fa3bbea143c5bee114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea3b9fd23aa0231fbe8f5143f41937faea3334e159877fe91d30fac210c02fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.111204 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722a046a-0d41-469c-ac7d-f58624c825aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:15:27.470633 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:15:27.473789 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2349626164/tls.crt::/tmp/serving-cert-2349626164/tls.key\\\\\\\"\\\\nI1126 13:15:33.385294 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:15:33.389476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:15:33.389513 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:15:33.389550 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:15:33.389562 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 13:15:33.402708 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 13:15:33.402742 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 13:15:33.402762 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 13:15:33.402770 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 13:15:33.402779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 13:15:33.402785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1126 13:15:33.403027 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1126 13:15:33.404826 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.128901 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://351ae44250214d182638e53fa0284293ae1b6aeb7d553f714a7e5c5bebff2a63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.151749 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b699909c13925db1ae5b92debd54ee1d515ae78bf20e5e7ce646339839316193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfaa39097f960cfecab7eb04321f2c3898691c4186a894dc9b13c900b6ca3f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.151875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.151969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.151997 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.152036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.152133 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:44Z","lastTransitionTime":"2025-11-26T13:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.174710 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lb7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:16:22Z\\\",\\\"message\\\":\\\"2025-11-26T13:15:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339\\\\n2025-11-26T13:15:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9316a48f-203f-46ec-bd64-b6b2172b9339 to /host/opt/cni/bin/\\\\n2025-11-26T13:15:37Z [verbose] multus-daemon started\\\\n2025-11-26T13:15:37Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:16:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:15:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scb6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lb7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.198826 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22348ef3bb38cdecd53876dd49fe4a73f3f9d798eea1386b1553d6cfd31b6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.213678 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67391449-89bb-423a-b690-2f60a43ccfad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5sgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6zzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:16:44Z is after 2025-08-24T17:21:41Z" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.255620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.255696 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.255716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.255746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.255768 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:44Z","lastTransitionTime":"2025-11-26T13:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the five-entry node-status block above ("Recording event message for node": NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, followed by "Node became not ready"/KubeletNotReady) repeats at roughly 100 ms intervals from 13:16:44.359456 through 13:16:44.773023; duplicate entries elided ...]
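For reference, the status patch quoted inside each err="failed to patch status ..." entry is a plain JSON strategic-merge patch that klog has serialized as a quoted string, which is why every quote in it is escaped. A stdlib-only sketch of the two-pass decode, shown on a toy excerpt of the patch logged above:

```python
import json

# The kubelet quotes the patch as a JSON string inside the err message, so
# decoding takes two passes: one json.loads to undo the string quoting and
# recover the raw patch text, and a second to parse the patch object itself.
quoted = '"{\\"metadata\\":{\\"uid\\":\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\"}}"'
inner = json.loads(quoted)   # -> '{"metadata":{"uid":"d75a4c96-..."}}'
patch = json.loads(inner)    # -> the patch as a dict
print(patch["metadata"]["uid"])  # d75a4c96-2883-4a0b-bab2-0fab2b6c0b49
```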
Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.798216 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:44 crc kubenswrapper[4747]: I1126 13:16:44.798231 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:44 crc kubenswrapper[4747]: E1126 13:16:44.798444 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:44 crc kubenswrapper[4747]: E1126 13:16:44.798574 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status block repeats from 13:16:44.876952 through 13:16:45.706428; duplicates elided ...]
Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.797962 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.798094 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:45 crc kubenswrapper[4747]: E1126 13:16:45.798597 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:45 crc kubenswrapper[4747]: E1126 13:16:45.799154 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status block repeats from 13:16:45.808860 through 13:16:46.739493; duplicates elided ...]
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.808860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.808916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.808929 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.808949 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.808969 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:45Z","lastTransitionTime":"2025-11-26T13:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.911752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.912140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.912336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.912524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:45 crc kubenswrapper[4747]: I1126 13:16:45.912740 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:45Z","lastTransitionTime":"2025-11-26T13:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.016325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.016396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.016417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.016445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.016467 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.120700 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.120818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.120884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.120919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.120942 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.223958 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.224015 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.224031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.224101 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.224119 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.326812 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.326863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.326879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.326901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.326918 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.430088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.430143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.430159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.430182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.430196 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.532864 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.532927 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.532943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.532963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.532983 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.636469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.636546 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.636568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.636592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.636613 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.739362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.739427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.739448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.739474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.739493 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.797607 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.797668 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:46 crc kubenswrapper[4747]: E1126 13:16:46.797829 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:46 crc kubenswrapper[4747]: E1126 13:16:46.798022 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.842742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.843394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.843431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.843464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.843484 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.946866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.946935 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.946952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.946978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:46 crc kubenswrapper[4747]: I1126 13:16:46.946998 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:46Z","lastTransitionTime":"2025-11-26T13:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.050587 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.050675 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.050701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.050734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.050758 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.153749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.153791 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.153800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.153816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.153830 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.256948 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.257007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.257028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.257097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.257122 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.360289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.360377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.360401 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.360814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.361150 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.464524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.464593 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.464628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.464658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.464679 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.568175 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.568217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.568228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.568245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.568255 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.671248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.671314 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.671331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.671355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.671374 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.773850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.773913 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.773921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.773935 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.773944 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.797655 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.797707 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:47 crc kubenswrapper[4747]: E1126 13:16:47.798332 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:47 crc kubenswrapper[4747]: E1126 13:16:47.798508 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.877033 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.877090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.877102 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.877118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.877126 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.979925 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.979984 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.980001 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.980027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:47 crc kubenswrapper[4747]: I1126 13:16:47.980045 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:47Z","lastTransitionTime":"2025-11-26T13:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.084222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.084289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.084307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.084329 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.084346 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.186674 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.186735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.186752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.186774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.186790 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.290044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.290129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.290145 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.290166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.290181 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.392667 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.392734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.392752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.392775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.392795 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.496905 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.496961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.496971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.496990 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.497001 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.600165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.600216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.600234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.600257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.600273 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.703022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.703079 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.703105 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.703125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.703138 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.798283 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.798294 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:48 crc kubenswrapper[4747]: E1126 13:16:48.798558 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:48 crc kubenswrapper[4747]: E1126 13:16:48.798804 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
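By this point the journal has settled into the same four pods cycling through identical sync failures. A small sketch for tallying which pods are stuck, assuming the journal has been saved to a file (kubelet.log is a hypothetical path):

```python
import re
from collections import Counter

# Count "Error syncing pod, skipping" entries per pod in a captured journal.
pattern = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical capture
    counts = Counter(pattern.findall(fh.read()))

for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")
```

All four pods here fail with the same "network is not ready" error, so the tally mainly confirms that no pod is making progress while the CNI config is missing.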
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.807028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.807124 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.807143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.807167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.807183 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.909921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.909967 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.909976 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.909991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:48 crc kubenswrapper[4747]: I1126 13:16:48.910000 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.018108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.018186 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.018207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.018232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.018250 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.120800 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.120861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.120882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.120905 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.120923 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.223233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.223722 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.223881 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.224081 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.224275 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.327624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.327936 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.328164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.328283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.328387 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.431011 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.431111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.431138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.431164 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.431185 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.533863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.534280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.534467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.534659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.534837 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:49Z","lastTransitionTime":"2025-11-26T13:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
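The condition={...} payload in the setters.go record above is plain JSON and can be pulled out of the journal for monitoring. A minimal stdlib sketch; the struct below is an assumption that simply mirrors the keys printed above, not kubelet's own type:

    // parsecond.go - decode the condition={...} payload printed when the node
    // goes NotReady. Field names mirror the JSON keys visible in the record.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:16:48Z","lastTransitionTime":"2025-11-26T13:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    	var c nodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }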
Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.797741 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:49 crc kubenswrapper[4747]: I1126 13:16:49.797842 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:49 crc kubenswrapper[4747]: E1126 13:16:49.797868 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:49 crc kubenswrapper[4747]: E1126 13:16:49.798021 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:50 crc kubenswrapper[4747]: I1126 13:16:50.797990 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:16:50 crc kubenswrapper[4747]: E1126 13:16:50.798176 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad"
Nov 26 13:16:50 crc kubenswrapper[4747]: I1126 13:16:50.798001 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:16:50 crc kubenswrapper[4747]: E1126 13:16:50.798378 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
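The no-sandbox/skip pairs above recur roughly once per second per pod with only timestamps changing. When reading such a journal offline, a small filter can collapse consecutive records that differ only in their timestamps; a rough stdlib sketch, where the regex is an assumption tuned to the kubenswrapper format above:

    // dedupe.go - collapse consecutive journal records that differ only in
    // their timestamp fields; reads the log on stdin, writes thinned output.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // tsPat blanks the syslog timestamp at the start of a record and the klog
    // I/E timestamps inside it before records are compared for equality.
    var tsPat = regexp.MustCompile(`^\w+ \d+ [\d:]+ |I\d{4} [\d:.]+ |E\d{4} [\d:.]+ `)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // records can be long
    	prev, count := "", 0
    	for sc.Scan() {
    		line := sc.Text()
    		key := tsPat.ReplaceAllString(line, "")
    		if key == prev {
    			count++ // same record, newer timestamp: suppress it
    			continue
    		}
    		if count > 0 {
    			fmt.Printf("-- last record repeated %d more times --\n", count)
    		}
    		fmt.Println(line)
    		prev, count = key, 0
    	}
    	if count > 0 {
    		fmt.Printf("-- last record repeated %d more times --\n", count)
    	}
    }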
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.797886 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.798088 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:16:51 crc kubenswrapper[4747]: E1126 13:16:51.798248 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:16:51 crc kubenswrapper[4747]: E1126 13:16:51.798387 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.924952 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"]
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.926553 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.928963 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.929198 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.929373 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.929509 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.973500 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-75p22" podStartSLOduration=77.973372026 podStartE2EDuration="1m17.973372026s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:51.95467877 +0000 UTC m=+98.940989785" watchObservedRunningTime="2025-11-26 13:16:51.973372026 +0000 UTC m=+98.959683041"
Nov 26 13:16:51 crc kubenswrapper[4747]: I1126 13:16:51.973820 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxtwd" podStartSLOduration=76.973788466 podStartE2EDuration="1m16.973788466s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:51.973613722 +0000 UTC m=+98.959924747" watchObservedRunningTime="2025-11-26 13:16:51.973788466 +0000 UTC m=+98.960099471"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.002561 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88bcc55f-53d5-445a-be0a-efee4176e9b6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.002668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.002736 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bcc55f-53d5-445a-be0a-efee4176e9b6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.003016 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.003144 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bcc55f-53d5-445a-be0a-efee4176e9b6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.033154 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.033118055 podStartE2EDuration="1m18.033118055s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.032729605 +0000 UTC m=+99.019040640" watchObservedRunningTime="2025-11-26 13:16:52.033118055 +0000 UTC m=+99.019429110"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.055002 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.05497438 podStartE2EDuration="1m19.05497438s" podCreationTimestamp="2025-11-26 13:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.053933094 +0000 UTC m=+99.040244109" watchObservedRunningTime="2025-11-26 13:16:52.05497438 +0000 UTC m=+99.041285405"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.104543 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.104595 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bcc55f-53d5-445a-be0a-efee4176e9b6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.104622 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88bcc55f-53d5-445a-be0a-efee4176e9b6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.104649 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.104663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bcc55f-53d5-445a-be0a-efee4176e9b6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.105674 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.105808 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88bcc55f-53d5-445a-be0a-efee4176e9b6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.106630 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88bcc55f-53d5-445a-be0a-efee4176e9b6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.116951 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lb7jc" podStartSLOduration=78.116920604 podStartE2EDuration="1m18.116920604s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.116349519 +0000 UTC m=+99.102660534" watchObservedRunningTime="2025-11-26 13:16:52.116920604 +0000 UTC m=+99.103231649"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.120079 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bcc55f-53d5-445a-be0a-efee4176e9b6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.153277 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bcc55f-53d5-445a-be0a-efee4176e9b6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccxz2\" (UID: \"88bcc55f-53d5-445a-be0a-efee4176e9b6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.193895 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.193870442 podStartE2EDuration="1m18.193870442s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.193179574 +0000 UTC m=+99.179490579" watchObservedRunningTime="2025-11-26 13:16:52.193870442 +0000 UTC m=+99.180181477"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.217883 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.21786463 podStartE2EDuration="47.21786463s" podCreationTimestamp="2025-11-26 13:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.205159643 +0000 UTC m=+99.191470648" watchObservedRunningTime="2025-11-26 13:16:52.21786463 +0000 UTC m=+99.204175645"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.228969 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.228947006 podStartE2EDuration="33.228947006s" podCreationTimestamp="2025-11-26 13:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.218088355 +0000 UTC m=+99.204399370" watchObservedRunningTime="2025-11-26 13:16:52.228947006 +0000 UTC m=+99.215258021"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.243264 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2"
Nov 26 13:16:52 crc kubenswrapper[4747]: W1126 13:16:52.268986 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88bcc55f_53d5_445a_be0a_efee4176e9b6.slice/crio-9545825aa6bbea589edcbdad4bfc0655136235974ec289936baca54a7b7aeef3 WatchSource:0}: Error finding container 9545825aa6bbea589edcbdad4bfc0655136235974ec289936baca54a7b7aeef3: Status 404 returned error can't find the container with id 9545825aa6bbea589edcbdad4bfc0655136235974ec289936baca54a7b7aeef3
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.322521 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podStartSLOduration=78.322500888 podStartE2EDuration="1m18.322500888s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.322467737 +0000 UTC m=+99.308778752" watchObservedRunningTime="2025-11-26 13:16:52.322500888 +0000 UTC m=+99.308811903"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.336446 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p296l" podStartSLOduration=78.336425175 podStartE2EDuration="1m18.336425175s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.335122672 +0000 UTC m=+99.321433707" watchObservedRunningTime="2025-11-26 13:16:52.336425175 +0000 UTC m=+99.322736200"
Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.355430 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t6mph" podStartSLOduration=78.355406118 podStartE2EDuration="1m18.355406118s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.354726661 +0000 UTC m=+99.341037676" watchObservedRunningTime="2025-11-26 13:16:52.355406118 +0000 UTC m=+99.341717163"
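In the pod_startup_latency_tracker records above, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp (the zero-value firstStartedPulling/lastFinishedPulling timestamps mean no image pull was observed). A sketch of that arithmetic for multus-additional-cni-plugins-75p22, using only values printed above; the layout string is an assumption matching the printed format:

    // sloduration.go - recompute podStartSLOduration from the record above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go accepts fractional seconds when parsing even though the layout
    	// omits them.
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, _ := time.Parse(layout, "2025-11-26 13:15:34 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-11-26 13:16:51.973372026 +0000 UTC")
    	// Prints 1m17.973372026s, matching podStartE2EDuration above.
    	fmt.Println(running.Sub(created))
    }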
podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.354726661 +0000 UTC m=+99.341037676" watchObservedRunningTime="2025-11-26 13:16:52.355406118 +0000 UTC m=+99.341717163" Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.394415 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2" event={"ID":"88bcc55f-53d5-445a-be0a-efee4176e9b6","Type":"ContainerStarted","Data":"de6547b9f58b2e4bcf06eb2683076d3adf2903586588b75907db9042a688b973"} Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.394487 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2" event={"ID":"88bcc55f-53d5-445a-be0a-efee4176e9b6","Type":"ContainerStarted","Data":"9545825aa6bbea589edcbdad4bfc0655136235974ec289936baca54a7b7aeef3"} Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.797944 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.798031 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:52 crc kubenswrapper[4747]: E1126 13:16:52.798588 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:52 crc kubenswrapper[4747]: E1126 13:16:52.798773 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:52 crc kubenswrapper[4747]: I1126 13:16:52.799110 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:16:52 crc kubenswrapper[4747]: E1126 13:16:52.799362 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:16:53 crc kubenswrapper[4747]: I1126 13:16:53.418391 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:53 crc kubenswrapper[4747]: E1126 13:16:53.418624 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:53 crc kubenswrapper[4747]: E1126 13:16:53.418712 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs podName:67391449-89bb-423a-b690-2f60a43ccfad nodeName:}" failed. No retries permitted until 2025-11-26 13:17:57.418688098 +0000 UTC m=+164.404999143 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs") pod "network-metrics-daemon-6zzh7" (UID: "67391449-89bb-423a-b690-2f60a43ccfad") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:16:53 crc kubenswrapper[4747]: I1126 13:16:53.798273 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:53 crc kubenswrapper[4747]: I1126 13:16:53.798348 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:53 crc kubenswrapper[4747]: E1126 13:16:53.799574 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:53 crc kubenswrapper[4747]: E1126 13:16:53.799823 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:54 crc kubenswrapper[4747]: I1126 13:16:54.797799 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:54 crc kubenswrapper[4747]: I1126 13:16:54.797852 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:54 crc kubenswrapper[4747]: E1126 13:16:54.798023 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:54 crc kubenswrapper[4747]: E1126 13:16:54.798196 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:55 crc kubenswrapper[4747]: I1126 13:16:55.797669 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:55 crc kubenswrapper[4747]: I1126 13:16:55.797825 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:55 crc kubenswrapper[4747]: E1126 13:16:55.798352 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:55 crc kubenswrapper[4747]: E1126 13:16:55.798629 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:56 crc kubenswrapper[4747]: I1126 13:16:56.797976 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:56 crc kubenswrapper[4747]: E1126 13:16:56.798239 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:56 crc kubenswrapper[4747]: I1126 13:16:56.798445 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:56 crc kubenswrapper[4747]: E1126 13:16:56.798687 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:57 crc kubenswrapper[4747]: I1126 13:16:57.799384 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:57 crc kubenswrapper[4747]: I1126 13:16:57.799485 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:57 crc kubenswrapper[4747]: E1126 13:16:57.799578 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:57 crc kubenswrapper[4747]: E1126 13:16:57.799663 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:16:58 crc kubenswrapper[4747]: I1126 13:16:58.798242 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:16:58 crc kubenswrapper[4747]: E1126 13:16:58.798427 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:16:58 crc kubenswrapper[4747]: I1126 13:16:58.798800 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:16:58 crc kubenswrapper[4747]: E1126 13:16:58.799097 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:16:59 crc kubenswrapper[4747]: I1126 13:16:59.797262 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:16:59 crc kubenswrapper[4747]: E1126 13:16:59.797445 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:16:59 crc kubenswrapper[4747]: I1126 13:16:59.797755 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:16:59 crc kubenswrapper[4747]: E1126 13:16:59.797852 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:00 crc kubenswrapper[4747]: I1126 13:17:00.798237 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:00 crc kubenswrapper[4747]: I1126 13:17:00.798323 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:00 crc kubenswrapper[4747]: E1126 13:17:00.798416 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:00 crc kubenswrapper[4747]: E1126 13:17:00.798495 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:01 crc kubenswrapper[4747]: I1126 13:17:01.797907 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:01 crc kubenswrapper[4747]: I1126 13:17:01.797953 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:01 crc kubenswrapper[4747]: E1126 13:17:01.798169 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:01 crc kubenswrapper[4747]: E1126 13:17:01.798362 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:02 crc kubenswrapper[4747]: I1126 13:17:02.797761 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:02 crc kubenswrapper[4747]: E1126 13:17:02.797913 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:02 crc kubenswrapper[4747]: I1126 13:17:02.797761 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:02 crc kubenswrapper[4747]: E1126 13:17:02.798332 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:03 crc kubenswrapper[4747]: I1126 13:17:03.798193 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:03 crc kubenswrapper[4747]: I1126 13:17:03.800481 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:03 crc kubenswrapper[4747]: E1126 13:17:03.800702 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:03 crc kubenswrapper[4747]: E1126 13:17:03.801035 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:04 crc kubenswrapper[4747]: I1126 13:17:04.797748 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:04 crc kubenswrapper[4747]: I1126 13:17:04.797747 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:04 crc kubenswrapper[4747]: E1126 13:17:04.797991 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:04 crc kubenswrapper[4747]: E1126 13:17:04.798048 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:05 crc kubenswrapper[4747]: I1126 13:17:05.798206 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:05 crc kubenswrapper[4747]: I1126 13:17:05.798216 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:05 crc kubenswrapper[4747]: E1126 13:17:05.798751 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:05 crc kubenswrapper[4747]: E1126 13:17:05.798924 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:06 crc kubenswrapper[4747]: I1126 13:17:06.797596 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:06 crc kubenswrapper[4747]: I1126 13:17:06.797711 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:06 crc kubenswrapper[4747]: E1126 13:17:06.797772 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:06 crc kubenswrapper[4747]: E1126 13:17:06.798016 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:07 crc kubenswrapper[4747]: I1126 13:17:07.798010 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:07 crc kubenswrapper[4747]: I1126 13:17:07.798034 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:07 crc kubenswrapper[4747]: E1126 13:17:07.798265 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:07 crc kubenswrapper[4747]: E1126 13:17:07.798378 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:07 crc kubenswrapper[4747]: I1126 13:17:07.799521 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:17:07 crc kubenswrapper[4747]: E1126 13:17:07.799782 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4wml_openshift-ovn-kubernetes(59482207-ba7e-4b71-a40b-968d8e3dcb8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" Nov 26 13:17:08 crc kubenswrapper[4747]: I1126 13:17:08.797690 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:08 crc kubenswrapper[4747]: E1126 13:17:08.797908 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:08 crc kubenswrapper[4747]: I1126 13:17:08.798374 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:08 crc kubenswrapper[4747]: E1126 13:17:08.798534 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.464269 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/1.log" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.464976 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/0.log" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.465090 4747 generic.go:334] "Generic (PLEG): container finished" podID="aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1" containerID="a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e" exitCode=1 Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.465143 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerDied","Data":"a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e"} Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.465200 4747 scope.go:117] "RemoveContainer" containerID="eccbca6eba01a47adc5adef5ba80ae3570ab643381f1d4c83fd094033af35e4d" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.465795 4747 scope.go:117] "RemoveContainer" containerID="a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e" Nov 26 13:17:09 crc kubenswrapper[4747]: E1126 13:17:09.466250 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lb7jc_openshift-multus(aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1)\"" pod="openshift-multus/multus-lb7jc" podUID="aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.492868 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccxz2" podStartSLOduration=95.492839032 podStartE2EDuration="1m35.492839032s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:16:52.405186068 +0000 UTC m=+99.391497113" watchObservedRunningTime="2025-11-26 13:17:09.492839032 +0000 UTC m=+116.479150097" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.797488 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:09 crc kubenswrapper[4747]: I1126 13:17:09.797576 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:09 crc kubenswrapper[4747]: E1126 13:17:09.797686 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:09 crc kubenswrapper[4747]: E1126 13:17:09.797890 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:10 crc kubenswrapper[4747]: I1126 13:17:10.470274 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/1.log" Nov 26 13:17:10 crc kubenswrapper[4747]: I1126 13:17:10.798251 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:10 crc kubenswrapper[4747]: I1126 13:17:10.798329 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:10 crc kubenswrapper[4747]: E1126 13:17:10.798504 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:10 crc kubenswrapper[4747]: E1126 13:17:10.798638 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:11 crc kubenswrapper[4747]: I1126 13:17:11.798523 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:11 crc kubenswrapper[4747]: I1126 13:17:11.798572 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:11 crc kubenswrapper[4747]: E1126 13:17:11.798718 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:11 crc kubenswrapper[4747]: E1126 13:17:11.798863 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:12 crc kubenswrapper[4747]: I1126 13:17:12.797666 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:12 crc kubenswrapper[4747]: E1126 13:17:12.797849 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:12 crc kubenswrapper[4747]: I1126 13:17:12.797919 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:12 crc kubenswrapper[4747]: E1126 13:17:12.798130 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:13 crc kubenswrapper[4747]: E1126 13:17:13.754471 4747 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 26 13:17:13 crc kubenswrapper[4747]: I1126 13:17:13.797609 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:13 crc kubenswrapper[4747]: I1126 13:17:13.797745 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:13 crc kubenswrapper[4747]: E1126 13:17:13.799621 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:13 crc kubenswrapper[4747]: E1126 13:17:13.799875 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:13 crc kubenswrapper[4747]: E1126 13:17:13.899596 4747 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 13:17:14 crc kubenswrapper[4747]: I1126 13:17:14.797967 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:14 crc kubenswrapper[4747]: I1126 13:17:14.797967 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:14 crc kubenswrapper[4747]: E1126 13:17:14.798155 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:14 crc kubenswrapper[4747]: E1126 13:17:14.798227 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:15 crc kubenswrapper[4747]: I1126 13:17:15.798199 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:15 crc kubenswrapper[4747]: E1126 13:17:15.798395 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:15 crc kubenswrapper[4747]: I1126 13:17:15.798235 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:15 crc kubenswrapper[4747]: E1126 13:17:15.798772 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:16 crc kubenswrapper[4747]: I1126 13:17:16.797723 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:16 crc kubenswrapper[4747]: I1126 13:17:16.797723 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:16 crc kubenswrapper[4747]: E1126 13:17:16.797909 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:16 crc kubenswrapper[4747]: E1126 13:17:16.798147 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:17 crc kubenswrapper[4747]: I1126 13:17:17.797863 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:17 crc kubenswrapper[4747]: I1126 13:17:17.797996 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:17 crc kubenswrapper[4747]: E1126 13:17:17.798240 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:17 crc kubenswrapper[4747]: E1126 13:17:17.798493 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:18 crc kubenswrapper[4747]: I1126 13:17:18.797469 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:18 crc kubenswrapper[4747]: I1126 13:17:18.797583 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:18 crc kubenswrapper[4747]: E1126 13:17:18.797667 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:18 crc kubenswrapper[4747]: E1126 13:17:18.797765 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:18 crc kubenswrapper[4747]: E1126 13:17:18.901825 4747 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 13:17:19 crc kubenswrapper[4747]: I1126 13:17:19.797725 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:19 crc kubenswrapper[4747]: E1126 13:17:19.797982 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:19 crc kubenswrapper[4747]: I1126 13:17:19.798038 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:19 crc kubenswrapper[4747]: E1126 13:17:19.798262 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:19 crc kubenswrapper[4747]: I1126 13:17:19.798848 4747 scope.go:117] "RemoveContainer" containerID="a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e" Nov 26 13:17:20 crc kubenswrapper[4747]: I1126 13:17:20.509656 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/1.log" Nov 26 13:17:20 crc kubenswrapper[4747]: I1126 13:17:20.510031 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerStarted","Data":"c225843c709935a5fa59c02609d44f595f192a71576db8fbbce3fb388e1f2d39"} Nov 26 13:17:20 crc kubenswrapper[4747]: I1126 13:17:20.797642 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:20 crc kubenswrapper[4747]: I1126 13:17:20.797692 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:20 crc kubenswrapper[4747]: E1126 13:17:20.797767 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:20 crc kubenswrapper[4747]: E1126 13:17:20.797874 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:21 crc kubenswrapper[4747]: I1126 13:17:21.797914 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:21 crc kubenswrapper[4747]: I1126 13:17:21.797923 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:21 crc kubenswrapper[4747]: E1126 13:17:21.798122 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:21 crc kubenswrapper[4747]: E1126 13:17:21.798770 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:21 crc kubenswrapper[4747]: I1126 13:17:21.799349 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.519238 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/3.log" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.521517 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerStarted","Data":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.521953 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.579807 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podStartSLOduration=108.579784347 podStartE2EDuration="1m48.579784347s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:22.578185877 +0000 UTC m=+129.564496992" watchObservedRunningTime="2025-11-26 13:17:22.579784347 +0000 UTC m=+129.566095382" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.798219 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.798238 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:22 crc kubenswrapper[4747]: E1126 13:17:22.798352 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:22 crc kubenswrapper[4747]: E1126 13:17:22.798446 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:22 crc kubenswrapper[4747]: I1126 13:17:22.984907 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6zzh7"] Nov 26 13:17:23 crc kubenswrapper[4747]: I1126 13:17:23.524991 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:23 crc kubenswrapper[4747]: E1126 13:17:23.525710 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:23 crc kubenswrapper[4747]: I1126 13:17:23.797875 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:23 crc kubenswrapper[4747]: I1126 13:17:23.797941 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:23 crc kubenswrapper[4747]: E1126 13:17:23.799894 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:23 crc kubenswrapper[4747]: E1126 13:17:23.800150 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:23 crc kubenswrapper[4747]: E1126 13:17:23.902684 4747 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 13:17:24 crc kubenswrapper[4747]: I1126 13:17:24.797565 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:24 crc kubenswrapper[4747]: I1126 13:17:24.797695 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:24 crc kubenswrapper[4747]: E1126 13:17:24.797798 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:24 crc kubenswrapper[4747]: E1126 13:17:24.797888 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:25 crc kubenswrapper[4747]: I1126 13:17:25.797977 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:25 crc kubenswrapper[4747]: E1126 13:17:25.798200 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:25 crc kubenswrapper[4747]: I1126 13:17:25.798657 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:25 crc kubenswrapper[4747]: E1126 13:17:25.798763 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:26 crc kubenswrapper[4747]: I1126 13:17:26.797286 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:26 crc kubenswrapper[4747]: I1126 13:17:26.797397 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:26 crc kubenswrapper[4747]: E1126 13:17:26.797819 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:26 crc kubenswrapper[4747]: E1126 13:17:26.798136 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:27 crc kubenswrapper[4747]: I1126 13:17:27.797814 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:27 crc kubenswrapper[4747]: I1126 13:17:27.797836 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:27 crc kubenswrapper[4747]: E1126 13:17:27.798014 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:17:27 crc kubenswrapper[4747]: E1126 13:17:27.798121 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:17:28 crc kubenswrapper[4747]: I1126 13:17:28.797655 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:28 crc kubenswrapper[4747]: I1126 13:17:28.797724 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:28 crc kubenswrapper[4747]: E1126 13:17:28.797865 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6zzh7" podUID="67391449-89bb-423a-b690-2f60a43ccfad" Nov 26 13:17:28 crc kubenswrapper[4747]: E1126 13:17:28.798033 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.798224 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.798615 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.801812 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.807447 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.807691 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 13:17:29 crc kubenswrapper[4747]: I1126 13:17:29.807836 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 13:17:30 crc kubenswrapper[4747]: I1126 13:17:30.798361 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7" Nov 26 13:17:30 crc kubenswrapper[4747]: I1126 13:17:30.798385 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:30 crc kubenswrapper[4747]: I1126 13:17:30.802127 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 13:17:30 crc kubenswrapper[4747]: I1126 13:17:30.802342 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.297566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.351826 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.352599 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.353782 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x8z2x"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.354831 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.355723 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pdp9j"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.356567 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.358189 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.358778 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.358973 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.359029 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.359084 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.359199 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.359041 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.359424 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.360598 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.374988 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.375171 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.375417 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.376131 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.376411 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.376473 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.380715 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.382086 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.383322 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.387909 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388003 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388315 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388533 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388814 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388846 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.388932 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389004 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389138 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389215 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389343 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389685 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389759 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.389687 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.390115 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.390209 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.390239 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.390522 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.390563 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 26 13:17:32 
crc kubenswrapper[4747]: I1126 13:17:32.400232 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.401362 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.415235 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.417277 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.417494 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.417536 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.417794 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.420236 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tv788"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.420486 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.420667 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tdmwg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.421214 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.422207 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.422715 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.422868 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.423187 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.422724 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.424120 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mr6p4"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.424457 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.424586 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.425406 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.425612 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.425662 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.426149 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.426528 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.427652 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ksg5q"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.428311 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.428484 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qvvvm"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.428868 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.429915 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r4nz4"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.430430 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.431283 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.431485 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.431650 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.431870 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.432099 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.432605 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.432746 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.432880 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.432983 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.433408 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.433489 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.433551 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.433657 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.434018 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.434160 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.434384 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6zcnr"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.434559 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.434805 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.435314 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.436657 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x8z2x"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.437822 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.438468 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.446593 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.456763 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.456805 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.457199 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.457320 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.457478 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.458119 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.458329 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459422 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459536 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459558 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459615 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459767 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.459871 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 
13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.460074 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.460668 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.460941 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.461778 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.465033 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466030 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466281 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466405 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466441 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466579 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466602 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466767 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466794 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466933 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.467016 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.467387 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.467742 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.468324 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.468478 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.466291 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.468630 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.468721 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.469552 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.469966 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.469971 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.470514 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.488931 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pdp9j"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.488975 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.489326 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.489715 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.489926 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.490359 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-encryption-config\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494842 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/480f682a-63f7-4ef6-b10c-29c34222269b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494863 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39864b7b-d0d0-4cdc-992d-6045872983cb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494884 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z92qx\" (UniqueName: \"kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494903 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd8pg\" (UniqueName: \"kubernetes.io/projected/774857d6-50c3-4ada-96ab-430dbeff8b0f-kube-api-access-kd8pg\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494924 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-image-import-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494957 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-config\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494981 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.494996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-images\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495010 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8mg\" (UniqueName: \"kubernetes.io/projected/480f682a-63f7-4ef6-b10c-29c34222269b-kube-api-access-9b8mg\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495034 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-node-pullsecrets\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495099 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-dir\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495115 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495130 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495144 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit-dir\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495164 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-client\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495179 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39864b7b-d0d0-4cdc-992d-6045872983cb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495205 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnwp\" (UniqueName: \"kubernetes.io/projected/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-kube-api-access-kdnwp\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495221 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495236 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e57bddb-aeda-470b-a095-ce0d84023e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495253 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-auth-proxy-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495268 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495283 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcdb5\" (UniqueName: \"kubernetes.io/projected/39864b7b-d0d0-4cdc-992d-6045872983cb-kube-api-access-lcdb5\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495298 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495315 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-machine-approver-tls\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495331 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbn7\" (UniqueName: \"kubernetes.io/projected/1e57bddb-aeda-470b-a095-ce0d84023e77-kube-api-access-jxbn7\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495347 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495362 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-encryption-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495378 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-serving-cert\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495394 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495409 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-serving-cert\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495425 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495441 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78866\" (UniqueName: \"kubernetes.io/projected/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-kube-api-access-78866\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495457 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-policies\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495473 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-client\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.495493 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.496980 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.497261 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.497360 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.497440 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.498323 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.499101 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8d24"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.499687 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.499961 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.501389 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.501576 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.511604 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.514641 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.516815 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.516960 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.517933 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.522078 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.536068 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.536585 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.536838 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.537261 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.537456 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.537594 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.537837 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.538280 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.538283 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.551843 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.553030 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.554985 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.555592 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.556501 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.558967 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ctbq5"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.559599 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.559826 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.569747 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpwnp"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.569927 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.570850 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.571485 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.571644 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.572062 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.578672 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.579260 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.579340 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.581171 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.581266 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.583634 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.583695 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4ll78"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.584995 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.586116 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.586136 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tv788"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.586145 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.586267 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.588279 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.588363 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6zcnr"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.588894 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mr6p4"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.589931 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ksg5q"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.590901 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.591854 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qvvvm"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.593570 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8d24"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.594079 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.594796 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.595551 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.595912 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-trusted-ca\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596007 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596128 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-metrics-tls\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596220 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnwp\" (UniqueName: \"kubernetes.io/projected/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-kube-api-access-kdnwp\") pod \"machine-approver-56656f9798-jcxz6\" 
(UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596293 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596364 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e57bddb-aeda-470b-a095-ce0d84023e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596441 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596511 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-service-ca\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596010 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596588 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-auth-proxy-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596709 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596859 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-oauth-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.596931 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcdb5\" (UniqueName: \"kubernetes.io/projected/39864b7b-d0d0-4cdc-992d-6045872983cb-kube-api-access-lcdb5\") pod 
\"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597005 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4dz\" (UniqueName: \"kubernetes.io/projected/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-kube-api-access-bj4dz\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597166 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-machine-approver-tls\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597233 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbn7\" (UniqueName: \"kubernetes.io/projected/1e57bddb-aeda-470b-a095-ce0d84023e77-kube-api-access-jxbn7\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597313 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597387 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-encryption-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597453 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-serving-cert\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597527 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597600 
4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-serving-cert\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndlms\" (UniqueName: \"kubernetes.io/projected/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-kube-api-access-ndlms\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597750 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78866\" (UniqueName: \"kubernetes.io/projected/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-kube-api-access-78866\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597894 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-policies\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597964 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgcj\" (UniqueName: \"kubernetes.io/projected/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-kube-api-access-jmgcj\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598035 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-srv-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598124 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-console-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598200 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598268 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-client\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598335 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598492 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-encryption-config\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598565 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-serving-cert\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598639 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/480f682a-63f7-4ef6-b10c-29c34222269b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598710 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39864b7b-d0d0-4cdc-992d-6045872983cb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598781 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z92qx\" (UniqueName: \"kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: 
\"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598932 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd8pg\" (UniqueName: \"kubernetes.io/projected/774857d6-50c3-4ada-96ab-430dbeff8b0f-kube-api-access-kd8pg\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599003 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-image-import-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599106 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a8d06af-468e-43a4-992f-b115fd13649e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599187 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-trusted-ca-bundle\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599260 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-config\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599336 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv7r2\" (UniqueName: \"kubernetes.io/projected/c5734b55-b478-4525-b5da-88b63b4812d0-kube-api-access-dv7r2\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599413 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599478 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599548 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-oauth-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599624 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-images\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599699 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8mg\" (UniqueName: \"kubernetes.io/projected/480f682a-63f7-4ef6-b10c-29c34222269b-kube-api-access-9b8mg\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599793 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9vxs\" (UniqueName: \"kubernetes.io/projected/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-kube-api-access-m9vxs\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599863 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzqg\" (UniqueName: \"kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599931 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.600016 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.600120 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-node-pullsecrets\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598000 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 
13:17:32.600278 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.600343 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.598580 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599299 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-auth-proxy-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.597829 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.599970 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601328 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-dir\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601382 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601407 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit-dir\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc 
kubenswrapper[4747]: I1126 13:17:32.601433 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfz9\" (UniqueName: \"kubernetes.io/projected/1a8d06af-468e-43a4-992f-b115fd13649e-kube-api-access-hvfz9\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601453 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601469 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-config\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601504 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-client\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601528 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39864b7b-d0d0-4cdc-992d-6045872983cb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601551 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a8d06af-468e-43a4-992f-b115fd13649e-proxy-tls\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.601656 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-node-pullsecrets\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.602288 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.602302 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-image-import-ca\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.602741 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.602919 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-images\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603235 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-audit-dir\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603255 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-config\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-dir\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603461 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603836 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.603944 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39864b7b-d0d0-4cdc-992d-6045872983cb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.604244 4747 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/774857d6-50c3-4ada-96ab-430dbeff8b0f-audit-policies\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.604384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.604446 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480f682a-63f7-4ef6-b10c-29c34222269b-config\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.604726 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.604769 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e57bddb-aeda-470b-a095-ce0d84023e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.605028 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.605638 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.605642 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-encryption-config\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.606342 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.606423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-machine-approver-tls\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:32 crc 
kubenswrapper[4747]: I1126 13:17:32.606971 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.607419 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-serving-cert\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.607874 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r4nz4"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.607889 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39864b7b-d0d0-4cdc-992d-6045872983cb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.608219 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-etcd-client\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.608845 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tdmwg"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.609519 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-etcd-client\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.610155 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.611091 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fjsbq"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.611650 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.612461 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nfwnt"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.613187 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.613448 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.614368 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.614895 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.615439 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpwnp"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.617517 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.618612 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.619512 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/480f682a-63f7-4ef6-b10c-29c34222269b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.619698 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.620108 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-encryption-config\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.620778 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.621777 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.622099 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774857d6-50c3-4ada-96ab-430dbeff8b0f-serving-cert\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.622788 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.623777 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nfwnt"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.624781 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-4ll78"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.625736 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.626674 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vblgj"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.627385 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.627769 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vblgj"] Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.636036 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.654378 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.675007 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.695108 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702219 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-console-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702361 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702385 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-serving-cert\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702685 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a8d06af-468e-43a4-992f-b115fd13649e-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702727 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-trusted-ca-bundle\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv7r2\" (UniqueName: \"kubernetes.io/projected/c5734b55-b478-4525-b5da-88b63b4812d0-kube-api-access-dv7r2\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702776 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-oauth-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702830 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9vxs\" (UniqueName: \"kubernetes.io/projected/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-kube-api-access-m9vxs\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702848 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnzqg\" (UniqueName: \"kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702878 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702912 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a8d06af-468e-43a4-992f-b115fd13649e-proxy-tls\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702930 
4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfz9\" (UniqueName: \"kubernetes.io/projected/1a8d06af-468e-43a4-992f-b115fd13649e-kube-api-access-hvfz9\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702956 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.702979 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-config\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703001 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703020 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-metrics-tls\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703024 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-trusted-ca\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703255 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703278 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-service-ca\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703417 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-oauth-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703440 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4dz\" (UniqueName: \"kubernetes.io/projected/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-kube-api-access-bj4dz\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703580 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndlms\" (UniqueName: \"kubernetes.io/projected/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-kube-api-access-ndlms\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703610 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgcj\" (UniqueName: \"kubernetes.io/projected/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-kube-api-access-jmgcj\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703701 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-console-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.703730 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-srv-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.704150 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-trusted-ca-bundle\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.704205 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a8d06af-468e-43a4-992f-b115fd13649e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 
26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.704208 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-oauth-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.704517 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-config\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.704811 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.705340 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5734b55-b478-4525-b5da-88b63b4812d0-service-ca\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.705745 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.706244 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-trusted-ca\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.706858 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-serving-cert\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.707202 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.707312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-serving-cert\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " 
pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.707404 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-metrics-tls\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.707429 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5734b55-b478-4525-b5da-88b63b4812d0-console-oauth-config\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.709743 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.714985 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.734616 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.754855 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.775222 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.794661 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.814941 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.835367 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.855822 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.875252 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.895319 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.915122 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 
13:17:32.935844 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.955353 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.974716 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 13:17:32 crc kubenswrapper[4747]: I1126 13:17:32.995026 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.009356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a8d06af-468e-43a4-992f-b115fd13649e-proxy-tls\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.015660 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.034853 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.057287 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.075442 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.099670 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.135791 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.154878 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.175594 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.195404 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.214836 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.220538 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-srv-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:33 crc 
kubenswrapper[4747]: I1126 13:17:33.234942 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.248692 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.255615 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.295478 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.314882 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.335784 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.355730 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.375526 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.396564 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.415781 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.450482 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.462495 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.474932 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.495771 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.515753 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.535476 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.555115 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 26 13:17:33 crc 
kubenswrapper[4747]: I1126 13:17:33.573881 4747 request.go:700] Waited for 1.017106593s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.576151 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.595810 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.615659 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.634787 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.655928 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.675749 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.695203 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.714926 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.736188 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.757894 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.775269 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.795768 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.815269 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.835767 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.855794 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.875672 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.895590 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.915231 4747 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.934803 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.955277 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 26 13:17:33 crc kubenswrapper[4747]: I1126 13:17:33.975735 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.005651 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.015821 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.035851 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.055185 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.075653 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.094905 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.115376 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.134751 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.155716 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.175111 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.194887 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.214909 4747 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.235924 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.282665 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnwp\" (UniqueName: \"kubernetes.io/projected/ebe9d962-0dd1-49af-a010-5f92d0bcad9f-kube-api-access-kdnwp\") pod \"machine-approver-56656f9798-jcxz6\" (UID: \"ebe9d962-0dd1-49af-a010-5f92d0bcad9f\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.305383 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcdb5\" (UniqueName: \"kubernetes.io/projected/39864b7b-d0d0-4cdc-992d-6045872983cb-kube-api-access-lcdb5\") pod \"openshift-config-operator-7777fb866f-jtfqg\" (UID: \"39864b7b-d0d0-4cdc-992d-6045872983cb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.322216 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78866\" (UniqueName: \"kubernetes.io/projected/d1c32fed-c28d-42e8-9bfb-e67af83e8c0b-kube-api-access-78866\") pod \"apiserver-76f77b778f-x8z2x\" (UID: \"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b\") " pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.330237 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.339036 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.344756 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z92qx\" (UniqueName: \"kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx\") pod \"controller-manager-879f6c89f-pwnxf\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:34 crc kubenswrapper[4747]: W1126 13:17:34.364152 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe9d962_0dd1_49af_a010_5f92d0bcad9f.slice/crio-f3c2ccd7677a7cd1810a8f273d8d24f9c8b74af1085e70ccd4ea11f1d87e318a WatchSource:0}: Error finding container f3c2ccd7677a7cd1810a8f273d8d24f9c8b74af1085e70ccd4ea11f1d87e318a: Status 404 returned error can't find the container with id f3c2ccd7677a7cd1810a8f273d8d24f9c8b74af1085e70ccd4ea11f1d87e318a Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.367910 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbn7\" (UniqueName: \"kubernetes.io/projected/1e57bddb-aeda-470b-a095-ce0d84023e77-kube-api-access-jxbn7\") pod \"cluster-samples-operator-665b6dd947-ff9mx\" (UID: \"1e57bddb-aeda-470b-a095-ce0d84023e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.386493 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd8pg\" (UniqueName: \"kubernetes.io/projected/774857d6-50c3-4ada-96ab-430dbeff8b0f-kube-api-access-kd8pg\") pod \"apiserver-7bbb656c7d-q5t5j\" (UID: \"774857d6-50c3-4ada-96ab-430dbeff8b0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.395694 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.402588 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8mg\" (UniqueName: 
\"kubernetes.io/projected/480f682a-63f7-4ef6-b10c-29c34222269b-kube-api-access-9b8mg\") pod \"machine-api-operator-5694c8668f-pdp9j\" (UID: \"480f682a-63f7-4ef6-b10c-29c34222269b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.414927 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.436377 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.455890 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.476744 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.485996 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.496020 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.498181 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.516177 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.536128 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.555696 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.564675 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" event={"ID":"ebe9d962-0dd1-49af-a010-5f92d0bcad9f","Type":"ContainerStarted","Data":"f3c2ccd7677a7cd1810a8f273d8d24f9c8b74af1085e70ccd4ea11f1d87e318a"} Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.568376 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.574172 4747 request.go:700] Waited for 1.946378576s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.576564 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.601923 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.622783 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.635880 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg"] Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.642372 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv7r2\" (UniqueName: \"kubernetes.io/projected/c5734b55-b478-4525-b5da-88b63b4812d0-kube-api-access-dv7r2\") pod \"console-f9d7485db-tv788\" (UID: \"c5734b55-b478-4525-b5da-88b63b4812d0\") " pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.654145 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfz9\" (UniqueName: \"kubernetes.io/projected/1a8d06af-468e-43a4-992f-b115fd13649e-kube-api-access-hvfz9\") pod \"machine-config-controller-84d6567774-8fbql\" (UID: \"1a8d06af-468e-43a4-992f-b115fd13649e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.657735 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tv788" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.666264 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9vxs\" (UniqueName: \"kubernetes.io/projected/36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b-kube-api-access-m9vxs\") pod \"dns-operator-744455d44c-r4nz4\" (UID: \"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b\") " pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.673514 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4dz\" (UniqueName: \"kubernetes.io/projected/eae28a17-d4a1-4b7b-8aed-9588514fd6e6-kube-api-access-bj4dz\") pod \"catalog-operator-68c6474976-cq6xq\" (UID: \"eae28a17-d4a1-4b7b-8aed-9588514fd6e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.690782 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndlms\" (UniqueName: \"kubernetes.io/projected/5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a-kube-api-access-ndlms\") pod \"console-operator-58897d9998-qvvvm\" (UID: \"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a\") " pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.711107 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgcj\" (UniqueName: \"kubernetes.io/projected/6ec07ab0-a220-458e-9fde-76c2f9f8cbd7-kube-api-access-jmgcj\") pod \"openshift-apiserver-operator-796bbdcf4f-6k6lm\" (UID: \"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.731750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnzqg\" (UniqueName: \"kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg\") pod \"route-controller-manager-6576b87f9c-sgvf9\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.746294 4747 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.746827 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.771881 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.786807 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x8z2x"] Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.786928 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.791845 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.823174 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pdp9j"] Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.831867 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839205 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e375034-28f3-4050-8e6a-8d6edc3abe02-serving-cert\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839244 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gv7z\" (UniqueName: \"kubernetes.io/projected/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-kube-api-access-4gv7z\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839299 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839326 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzfx\" (UniqueName: \"kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839350 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839373 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8hp\" (UniqueName: \"kubernetes.io/projected/9e375034-28f3-4050-8e6a-8d6edc3abe02-kube-api-access-5m8hp\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839398 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a884ca-524b-4ff7-b955-c9b207e2861d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839424 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839446 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839469 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839493 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ce8782-9463-46a1-b5ea-73c6d3c01589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839518 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839555 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0882051f-62cd-4cbd-a3b5-561072a04aeb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839591 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhj4s\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839621 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839696 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-config\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839717 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6037b8-8286-4388-b575-81d0d1f698c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-service-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839765 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6037b8-8286-4388-b575-81d0d1f698c8-config\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839818 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839850 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-config\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839870 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a884ca-524b-4ff7-b955-c9b207e2861d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839916 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839936 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839974 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.839995 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38ce8782-9463-46a1-b5ea-73c6d3c01589-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840014 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-serving-cert\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840034 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840075 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840098 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6037b8-8286-4388-b575-81d0d1f698c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840117 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-client\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840138 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840158 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840178 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncj6m\" (UniqueName: \"kubernetes.io/projected/e1a884ca-524b-4ff7-b955-c9b207e2861d-kube-api-access-ncj6m\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840203 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884zq\" (UniqueName: \"kubernetes.io/projected/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-kube-api-access-884zq\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840228 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-config\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840270 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840292 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrmt\" (UniqueName: \"kubernetes.io/projected/1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba-kube-api-access-wsrmt\") pod \"downloads-7954f5f757-mr6p4\" (UID: \"1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba\") " pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840316 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtshk\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-kube-api-access-rtshk\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840342 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840368 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ce8782-9463-46a1-b5ea-73c6d3c01589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840483 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840537 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840707 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840762 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0882051f-62cd-4cbd-a3b5-561072a04aeb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840789 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.840803 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-service-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: E1126 13:17:34.841576 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.341561621 +0000 UTC m=+142.327872736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.851608 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:34 crc kubenswrapper[4747]: W1126 13:17:34.856761 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod480f682a_63f7_4ef6_b10c_29c34222269b.slice/crio-3b2cf5a77c26e614f4f39d8073e017b640e9c4b0612f28e7ed54f20e3aac4f15 WatchSource:0}: Error finding container 3b2cf5a77c26e614f4f39d8073e017b640e9c4b0612f28e7ed54f20e3aac4f15: Status 404 returned error can't find the container with id 3b2cf5a77c26e614f4f39d8073e017b640e9c4b0612f28e7ed54f20e3aac4f15 Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942450 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6037b8-8286-4388-b575-81d0d1f698c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942601 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99bc3650-9d3c-4b40-bc41-cd10a68378e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942620 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942635 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942650 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884zq\" (UniqueName: \"kubernetes.io/projected/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-kube-api-access-884zq\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942689 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-plugins-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942707 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-config\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942724 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942759 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrmt\" (UniqueName: \"kubernetes.io/projected/1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba-kube-api-access-wsrmt\") pod \"downloads-7954f5f757-mr6p4\" (UID: \"1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba\") " pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:34 crc 
kubenswrapper[4747]: I1126 13:17:34.942775 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba76ac-e300-4f6f-b100-1546b5bbd85b-serving-cert\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942829 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnx77\" (UniqueName: \"kubernetes.io/projected/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-kube-api-access-fnx77\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942850 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942900 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942916 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0882051f-62cd-4cbd-a3b5-561072a04aeb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942948 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zdvc\" (UniqueName: \"kubernetes.io/projected/f637a777-d9a6-44b1-a6fd-de227846cf5b-kube-api-access-2zdvc\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.942984 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99bc3650-9d3c-4b40-bc41-cd10a68378e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943010 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e375034-28f3-4050-8e6a-8d6edc3abe02-serving-cert\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943078 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frzfx\" (UniqueName: \"kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943104 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwxr\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-kube-api-access-bzwxr\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943126 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66ee9991-c7b9-4f4d-a995-6dcbce726841-config-volume\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943149 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a884ca-524b-4ff7-b955-c9b207e2861d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943178 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-node-bootstrap-token\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943200 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/78884d47-16c2-4ce8-8986-814701e6f244-signing-cabundle\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943223 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943244 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66ee9991-c7b9-4f4d-a995-6dcbce726841-metrics-tls\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943268 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ce8782-9463-46a1-b5ea-73c6d3c01589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943291 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0882051f-62cd-4cbd-a3b5-561072a04aeb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943314 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-kube-api-access-zz9v7\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943337 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/78884d47-16c2-4ce8-8986-814701e6f244-signing-key\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943372 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943393 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-socket-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943414 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmhh\" (UniqueName: 
\"kubernetes.io/projected/e3f6ec2d-f94c-434b-8072-2beaee292fb6-kube-api-access-rdmhh\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943437 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba76ac-e300-4f6f-b100-1546b5bbd85b-config\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943482 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-metrics-certs\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943505 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-config\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943530 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943552 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-service-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943598 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2gt\" (UniqueName: \"kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt\") pod 
\"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943619 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-srv-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943642 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2da97ad-2e38-47fe-8223-dd5fac723e66-cert\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943664 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4kx\" (UniqueName: \"kubernetes.io/projected/3dbff5f0-a78d-4a15-9652-f01b1e882a42-kube-api-access-jt4kx\") pod \"migrator-59844c95c7-scvdh\" (UID: \"3dbff5f0-a78d-4a15-9652-f01b1e882a42\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943712 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943735 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943794 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6x5t\" (UniqueName: \"kubernetes.io/projected/4395cb3b-b843-4c5b-8312-adcd0887d777-kube-api-access-v6x5t\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943819 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-config\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943841 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 
26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943862 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-webhook-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943882 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-mountpoint-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943905 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wlsg\" (UniqueName: \"kubernetes.io/projected/1d11da9a-1232-421f-9b58-63cb5d519a0a-kube-api-access-5wlsg\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943927 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97e670ff-0ad9-4b6b-9017-91aa22702320-tmpfs\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943961 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4395cb3b-b843-4c5b-8312-adcd0887d777-service-ca-bundle\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.943982 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-stats-auth\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944004 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944262 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-default-certificate\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944299 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-client\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944323 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncj6m\" (UniqueName: \"kubernetes.io/projected/e1a884ca-524b-4ff7-b955-c9b207e2861d-kube-api-access-ncj6m\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944377 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944414 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtshk\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-kube-api-access-rtshk\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944450 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ce8782-9463-46a1-b5ea-73c6d3c01589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944472 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-proxy-tls\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944498 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944521 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944545 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4znjx\" (UniqueName: \"kubernetes.io/projected/78884d47-16c2-4ce8-8986-814701e6f244-kube-api-access-4znjx\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944600 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsqx\" (UniqueName: \"kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944623 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944645 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944669 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-service-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944710 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gv7z\" (UniqueName: \"kubernetes.io/projected/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-kube-api-access-4gv7z\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944736 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944760 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944783 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944808 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944832 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8hp\" (UniqueName: \"kubernetes.io/projected/9e375034-28f3-4050-8e6a-8d6edc3abe02-kube-api-access-5m8hp\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944854 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-csi-data-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944894 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944917 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-certs\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944939 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8fl\" (UniqueName: \"kubernetes.io/projected/97e670ff-0ad9-4b6b-9017-91aa22702320-kube-api-access-km8fl\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.944988 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhj4s\" (UniqueName: 
\"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945012 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsl9c\" (UniqueName: \"kubernetes.io/projected/e2da97ad-2e38-47fe-8223-dd5fac723e66-kube-api-access-xsl9c\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945126 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6037b8-8286-4388-b575-81d0d1f698c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945151 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967h8\" (UniqueName: \"kubernetes.io/projected/6cba76ac-e300-4f6f-b100-1546b5bbd85b-kube-api-access-967h8\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945208 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8md\" (UniqueName: \"kubernetes.io/projected/2ce131be-f5b6-4812-8e89-ed2702d6f47f-kube-api-access-vl8md\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945237 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6037b8-8286-4388-b575-81d0d1f698c8-config\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945272 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kznfx\" (UniqueName: \"kubernetes.io/projected/66ee9991-c7b9-4f4d-a995-6dcbce726841-kube-api-access-kznfx\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945335 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f637a777-d9a6-44b1-a6fd-de227846cf5b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945359 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945391 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945414 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce131be-f5b6-4812-8e89-ed2702d6f47f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945432 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a884ca-524b-4ff7-b955-c9b207e2861d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945448 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-registration-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945464 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvrs\" (UniqueName: \"kubernetes.io/projected/1fe80ce7-7278-4c19-b658-1ec3336280e8-kube-api-access-vlvrs\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945489 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 
13:17:34.945517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945540 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38ce8782-9463-46a1-b5ea-73c6d3c01589-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945563 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-serving-cert\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945583 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.945604 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-images\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:34 crc kubenswrapper[4747]: E1126 13:17:34.945943 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.445924915 +0000 UTC m=+142.432235950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.947145 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.948747 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.948923 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ce8782-9463-46a1-b5ea-73c6d3c01589-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.950709 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-config\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.951730 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-service-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.953036 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-config\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.954005 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.954543 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-service-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.954533 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0882051f-62cd-4cbd-a3b5-561072a04aeb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.955444 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.956903 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6037b8-8286-4388-b575-81d0d1f698c8-config\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.957140 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e375034-28f3-4050-8e6a-8d6edc3abe02-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.951762 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.957776 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.960557 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a884ca-524b-4ff7-b955-c9b207e2861d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.961088 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.961693 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.964013 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.966391 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.966422 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6037b8-8286-4388-b575-81d0d1f698c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.966596 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a884ca-524b-4ff7-b955-c9b207e2861d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.966781 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.969081 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e375034-28f3-4050-8e6a-8d6edc3abe02-serving-cert\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.969782 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-config\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" 
Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.971396 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-client\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.972253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-etcd-ca\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.972820 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.973412 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.976949 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.979786 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.984890 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.985572 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.986199 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.986330 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ce8782-9463-46a1-b5ea-73c6d3c01589-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.987207 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.989890 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.994593 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-serving-cert\") pod \"etcd-operator-b45778765-6zcnr\" (UID: \"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.996483 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6037b8-8286-4388-b575-81d0d1f698c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvkb7\" (UID: \"fc6037b8-8286-4388-b575-81d0d1f698c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:34 crc kubenswrapper[4747]: I1126 13:17:34.997162 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0882051f-62cd-4cbd-a3b5-561072a04aeb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.013021 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.034193 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884zq\" (UniqueName: \"kubernetes.io/projected/616d1d14-e2cc-473f-98d9-9ed776ebfd4e-kube-api-access-884zq\") pod \"etcd-operator-b45778765-6zcnr\" (UID: 
\"616d1d14-e2cc-473f-98d9-9ed776ebfd4e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047629 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-default-certificate\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047683 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047700 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-proxy-tls\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047719 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4znjx\" (UniqueName: \"kubernetes.io/projected/78884d47-16c2-4ce8-8986-814701e6f244-kube-api-access-4znjx\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047742 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047764 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsqx\" (UniqueName: \"kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047794 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047816 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-csi-data-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: 
I1126 13:17:35.047836 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-certs\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047851 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8fl\" (UniqueName: \"kubernetes.io/projected/97e670ff-0ad9-4b6b-9017-91aa22702320-kube-api-access-km8fl\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsl9c\" (UniqueName: \"kubernetes.io/projected/e2da97ad-2e38-47fe-8223-dd5fac723e66-kube-api-access-xsl9c\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047914 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-967h8\" (UniqueName: \"kubernetes.io/projected/6cba76ac-e300-4f6f-b100-1546b5bbd85b-kube-api-access-967h8\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047929 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8md\" (UniqueName: \"kubernetes.io/projected/2ce131be-f5b6-4812-8e89-ed2702d6f47f-kube-api-access-vl8md\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047947 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznfx\" (UniqueName: \"kubernetes.io/projected/66ee9991-c7b9-4f4d-a995-6dcbce726841-kube-api-access-kznfx\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047964 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f637a777-d9a6-44b1-a6fd-de227846cf5b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.047989 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048004 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048020 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce131be-f5b6-4812-8e89-ed2702d6f47f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-registration-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048070 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvrs\" (UniqueName: \"kubernetes.io/projected/1fe80ce7-7278-4c19-b658-1ec3336280e8-kube-api-access-vlvrs\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048094 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-images\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048110 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99bc3650-9d3c-4b40-bc41-cd10a68378e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048128 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-plugins-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048149 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048171 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba76ac-e300-4f6f-b100-1546b5bbd85b-serving-cert\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: 
\"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048188 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048206 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnx77\" (UniqueName: \"kubernetes.io/projected/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-kube-api-access-fnx77\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048229 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zdvc\" (UniqueName: \"kubernetes.io/projected/f637a777-d9a6-44b1-a6fd-de227846cf5b-kube-api-access-2zdvc\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048245 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99bc3650-9d3c-4b40-bc41-cd10a68378e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048272 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzwxr\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-kube-api-access-bzwxr\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048287 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-node-bootstrap-token\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048303 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66ee9991-c7b9-4f4d-a995-6dcbce726841-config-volume\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048324 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66ee9991-c7b9-4f4d-a995-6dcbce726841-metrics-tls\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048344 
4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/78884d47-16c2-4ce8-8986-814701e6f244-signing-cabundle\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-kube-api-access-zz9v7\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048377 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/78884d47-16c2-4ce8-8986-814701e6f244-signing-key\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048391 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048406 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-socket-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048422 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdmhh\" (UniqueName: \"kubernetes.io/projected/e3f6ec2d-f94c-434b-8072-2beaee292fb6-kube-api-access-rdmhh\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048438 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048451 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba76ac-e300-4f6f-b100-1546b5bbd85b-config\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048475 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-metrics-certs\") pod \"router-default-5444994796-ctbq5\" (UID: 
\"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048492 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048506 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048523 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2gt\" (UniqueName: \"kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048537 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-srv-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048553 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4kx\" (UniqueName: \"kubernetes.io/projected/3dbff5f0-a78d-4a15-9652-f01b1e882a42-kube-api-access-jt4kx\") pod \"migrator-59844c95c7-scvdh\" (UID: \"3dbff5f0-a78d-4a15-9652-f01b1e882a42\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048567 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2da97ad-2e38-47fe-8223-dd5fac723e66-cert\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6x5t\" (UniqueName: \"kubernetes.io/projected/4395cb3b-b843-4c5b-8312-adcd0887d777-kube-api-access-v6x5t\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048604 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-webhook-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048619 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-mountpoint-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048633 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wlsg\" (UniqueName: \"kubernetes.io/projected/1d11da9a-1232-421f-9b58-63cb5d519a0a-kube-api-access-5wlsg\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048647 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97e670ff-0ad9-4b6b-9017-91aa22702320-tmpfs\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4395cb3b-b843-4c5b-8312-adcd0887d777-service-ca-bundle\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.048677 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-stats-auth\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.052596 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-csi-data-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.053078 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.553047398 +0000 UTC m=+142.539358413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.054568 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba76ac-e300-4f6f-b100-1546b5bbd85b-config\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.054979 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-stats-auth\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.055655 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/78884d47-16c2-4ce8-8986-814701e6f244-signing-cabundle\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.055710 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.056184 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-proxy-tls\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.056348 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.056410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-socket-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.056944 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-registration-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.056985 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-plugins-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.057365 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66ee9991-c7b9-4f4d-a995-6dcbce726841-config-volume\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.057416 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-images\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.058333 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99bc3650-9d3c-4b40-bc41-cd10a68378e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.058421 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1fe80ce7-7278-4c19-b658-1ec3336280e8-mountpoint-dir\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.058680 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97e670ff-0ad9-4b6b-9017-91aa22702320-tmpfs\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.059558 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-srv-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.060604 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.060787 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4395cb3b-b843-4c5b-8312-adcd0887d777-service-ca-bundle\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.062849 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.063690 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-metrics-certs\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.064371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4395cb3b-b843-4c5b-8312-adcd0887d777-default-certificate\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.067140 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.067195 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.067419 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce131be-f5b6-4812-8e89-ed2702d6f47f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.067603 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-certs\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.067782 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhj4s\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.069042 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.070384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e670ff-0ad9-4b6b-9017-91aa22702320-webhook-cert\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.070954 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.071045 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2da97ad-2e38-47fe-8223-dd5fac723e66-cert\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.071371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba76ac-e300-4f6f-b100-1546b5bbd85b-serving-cert\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.071398 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99bc3650-9d3c-4b40-bc41-cd10a68378e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.071681 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/78884d47-16c2-4ce8-8986-814701e6f244-signing-key\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.072026 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f637a777-d9a6-44b1-a6fd-de227846cf5b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.074319 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/66ee9991-c7b9-4f4d-a995-6dcbce726841-metrics-tls\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.074455 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d11da9a-1232-421f-9b58-63cb5d519a0a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.074798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8hp\" (UniqueName: \"kubernetes.io/projected/9e375034-28f3-4050-8e6a-8d6edc3abe02-kube-api-access-5m8hp\") pod \"authentication-operator-69f744f599-tdmwg\" (UID: \"9e375034-28f3-4050-8e6a-8d6edc3abe02\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.078600 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e3f6ec2d-f94c-434b-8072-2beaee292fb6-node-bootstrap-token\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.090532 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.112463 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtshk\" (UniqueName: \"kubernetes.io/projected/0882051f-62cd-4cbd-a3b5-561072a04aeb-kube-api-access-rtshk\") pod \"cluster-image-registry-operator-dc59b4c8b-gqp27\" (UID: \"0882051f-62cd-4cbd-a3b5-561072a04aeb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.121704 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.122411 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.123842 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.141650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa005c4-b69f-4bb7-90e6-c2f58210f9d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jq2nt\" (UID: \"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.150344 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.150753 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.650737664 +0000 UTC m=+142.637048679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.151176 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gv7z\" (UniqueName: \"kubernetes.io/projected/b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e-kube-api-access-4gv7z\") pod \"multus-admission-controller-857f4d67dd-x8d24\" (UID: \"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.175777 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrmt\" (UniqueName: \"kubernetes.io/projected/1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba-kube-api-access-wsrmt\") pod \"downloads-7954f5f757-mr6p4\" (UID: \"1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba\") " pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.176735 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tv788"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.190894 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncj6m\" (UniqueName: \"kubernetes.io/projected/e1a884ca-524b-4ff7-b955-c9b207e2861d-kube-api-access-ncj6m\") pod \"openshift-controller-manager-operator-756b6f6bc6-wjngl\" (UID: \"e1a884ca-524b-4ff7-b955-c9b207e2861d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.233779 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38ce8782-9463-46a1-b5ea-73c6d3c01589-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4pk2g\" (UID: \"38ce8782-9463-46a1-b5ea-73c6d3c01589\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:35 crc kubenswrapper[4747]: W1126 13:17:35.248570 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5734b55_b478_4525_b5da_88b63b4812d0.slice/crio-cba4076229b8bc66c328cfa08d0600b304469d0977fa39dcffca374bf9754112 WatchSource:0}: Error finding container cba4076229b8bc66c328cfa08d0600b304469d0977fa39dcffca374bf9754112: Status 404 returned error can't find the container with id cba4076229b8bc66c328cfa08d0600b304469d0977fa39dcffca374bf9754112 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.250895 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frzfx\" (UniqueName: \"kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx\") pod \"oauth-openshift-558db77b4-ksg5q\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.251928 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.252386 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.75236539 +0000 UTC m=+142.738676495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.267329 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.277038 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnx77\" (UniqueName: \"kubernetes.io/projected/aedc5169-b6a2-4ffe-8845-099fc5b6f9c1-kube-api-access-fnx77\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxqxh\" (UID: \"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.293846 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.297791 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qvvvm"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.299046 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r4nz4"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.301699 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.302637 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznfx\" (UniqueName: \"kubernetes.io/projected/66ee9991-c7b9-4f4d-a995-6dcbce726841-kube-api-access-kznfx\") pod \"dns-default-nfwnt\" (UID: \"66ee9991-c7b9-4f4d-a995-6dcbce726841\") " pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.309395 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.317351 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.330850 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.330931 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4znjx\" (UniqueName: \"kubernetes.io/projected/78884d47-16c2-4ce8-8986-814701e6f244-kube-api-access-4znjx\") pod \"service-ca-9c57cc56f-rpwnp\" (UID: \"78884d47-16c2-4ce8-8986-814701e6f244\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: W1126 13:17:35.333065 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a7bfd7c_5b8c_4f76_95a7_4d67c0bbba6a.slice/crio-d4438eecb6650f749b6f08591cae77855dc721b60e79c4f1af92e9c03a67b09f WatchSource:0}: Error finding container d4438eecb6650f749b6f08591cae77855dc721b60e79c4f1af92e9c03a67b09f: Status 404 returned error can't find the container with id d4438eecb6650f749b6f08591cae77855dc721b60e79c4f1af92e9c03a67b09f Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.339516 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.348750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsqx\" (UniqueName: \"kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx\") pod \"collect-profiles-29402715-vsvbk\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.353387 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.353741 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.853729218 +0000 UTC m=+142.840040233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.372762 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.375334 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6x5t\" (UniqueName: \"kubernetes.io/projected/4395cb3b-b843-4c5b-8312-adcd0887d777-kube-api-access-v6x5t\") pod \"router-default-5444994796-ctbq5\" (UID: \"4395cb3b-b843-4c5b-8312-adcd0887d777\") " pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.380396 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.393348 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8fl\" (UniqueName: \"kubernetes.io/projected/97e670ff-0ad9-4b6b-9017-91aa22702320-kube-api-access-km8fl\") pod \"packageserver-d55dfcdfc-2w4bp\" (UID: \"97e670ff-0ad9-4b6b-9017-91aa22702320\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.399157 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.407649 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.408532 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.413377 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.417487 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.425880 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsl9c\" (UniqueName: \"kubernetes.io/projected/e2da97ad-2e38-47fe-8223-dd5fac723e66-kube-api-access-xsl9c\") pod \"ingress-canary-vblgj\" (UID: \"e2da97ad-2e38-47fe-8223-dd5fac723e66\") " pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.432371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-967h8\" (UniqueName: \"kubernetes.io/projected/6cba76ac-e300-4f6f-b100-1546b5bbd85b-kube-api-access-967h8\") pod \"service-ca-operator-777779d784-gbnb2\" (UID: \"6cba76ac-e300-4f6f-b100-1546b5bbd85b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.440345 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.445878 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.454234 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8md\" (UniqueName: \"kubernetes.io/projected/2ce131be-f5b6-4812-8e89-ed2702d6f47f-kube-api-access-vl8md\") pod \"package-server-manager-789f6589d5-qpslg\" (UID: \"2ce131be-f5b6-4812-8e89-ed2702d6f47f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.454891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.455218 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:35.955207569 +0000 UTC m=+142.941518574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.473651 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvrs\" (UniqueName: \"kubernetes.io/projected/1fe80ce7-7278-4c19-b658-1ec3336280e8-kube-api-access-vlvrs\") pod \"csi-hostpathplugin-4ll78\" (UID: \"1fe80ce7-7278-4c19-b658-1ec3336280e8\") " pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.482182 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.497284 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.498371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/c5eb732b-9fb0-4c46-b550-dce7eebf78f5-kube-api-access-zz9v7\") pod \"machine-config-operator-74547568cd-fnj9w\" (UID: \"c5eb732b-9fb0-4c46-b550-dce7eebf78f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.505977 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.512367 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.515129 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zdvc\" (UniqueName: \"kubernetes.io/projected/f637a777-d9a6-44b1-a6fd-de227846cf5b-kube-api-access-2zdvc\") pod \"control-plane-machine-set-operator-78cbb6b69f-zw88v\" (UID: \"f637a777-d9a6-44b1-a6fd-de227846cf5b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.526637 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.539925 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wlsg\" (UniqueName: \"kubernetes.io/projected/1d11da9a-1232-421f-9b58-63cb5d519a0a-kube-api-access-5wlsg\") pod \"olm-operator-6b444d44fb-qrchs\" (UID: \"1d11da9a-1232-421f-9b58-63cb5d519a0a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.544611 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:35 crc kubenswrapper[4747]: W1126 13:17:35.551420 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ec07ab0_a220_458e_9fde_76c2f9f8cbd7.slice/crio-db71e6877858648368658d6357a2132de2de0f76c09a4f847a397e904ecaebd4 WatchSource:0}: Error finding container db71e6877858648368658d6357a2132de2de0f76c09a4f847a397e904ecaebd4: Status 404 returned error can't find the container with id db71e6877858648368658d6357a2132de2de0f76c09a4f847a397e904ecaebd4 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.551780 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.571717 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.572226 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.072194101 +0000 UTC m=+143.058505116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.572480 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.572814 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.072797226 +0000 UTC m=+143.059108241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.582008 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4kx\" (UniqueName: \"kubernetes.io/projected/3dbff5f0-a78d-4a15-9652-f01b1e882a42-kube-api-access-jt4kx\") pod \"migrator-59844c95c7-scvdh\" (UID: \"3dbff5f0-a78d-4a15-9652-f01b1e882a42\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.587464 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" event={"ID":"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b","Type":"ContainerStarted","Data":"a804d1bfb86fba64e118dd51b6922d16848b0edce110bfd7fd258ff672dd731d"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.590801 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdmhh\" (UniqueName: \"kubernetes.io/projected/e3f6ec2d-f94c-434b-8072-2beaee292fb6-kube-api-access-rdmhh\") pod \"machine-config-server-fjsbq\" (UID: \"e3f6ec2d-f94c-434b-8072-2beaee292fb6\") " pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.591091 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" event={"ID":"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7","Type":"ContainerStarted","Data":"db71e6877858648368658d6357a2132de2de0f76c09a4f847a397e904ecaebd4"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.593309 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.594940 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2gt\" (UniqueName: \"kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt\") pod \"marketplace-operator-79b997595-zqvkf\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.595117 4747 generic.go:334] "Generic (PLEG): container finished" podID="d1c32fed-c28d-42e8-9bfb-e67af83e8c0b" containerID="c97cac78a9d68bd14d51a89ab9cbd0e912ff458c8a8bb6966b4ffd410e5517a2" exitCode=0 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.595328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" event={"ID":"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b","Type":"ContainerDied","Data":"c97cac78a9d68bd14d51a89ab9cbd0e912ff458c8a8bb6966b4ffd410e5517a2"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.595390 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" event={"ID":"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b","Type":"ContainerStarted","Data":"059fac4f9b93f6d6cbe442e0019def0f99652125d8a945f7307f8f452b774793"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.604228 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjsbq" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.614805 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzwxr\" (UniqueName: \"kubernetes.io/projected/99bc3650-9d3c-4b40-bc41-cd10a68378e8-kube-api-access-bzwxr\") pod \"ingress-operator-5b745b69d9-qhgpx\" (UID: \"99bc3650-9d3c-4b40-bc41-cd10a68378e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.631204 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vblgj" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.639335 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" event={"ID":"ebe9d962-0dd1-49af-a010-5f92d0bcad9f","Type":"ContainerStarted","Data":"3f107a772cee669d419228a82982fb4fb58658154b163742459c1bf293b7b95f"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.639401 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" event={"ID":"ebe9d962-0dd1-49af-a010-5f92d0bcad9f","Type":"ContainerStarted","Data":"ac5664ea62688a76b32d511bdee673a1b58959bc03e226ab742381acebb2a15c"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.641798 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" event={"ID":"1e57bddb-aeda-470b-a095-ce0d84023e77","Type":"ContainerStarted","Data":"30e2a3aa150e8e918c1fab272d6b76bd171cbbacae9c20e6adb9ccf499600727"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.641845 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" event={"ID":"1e57bddb-aeda-470b-a095-ce0d84023e77","Type":"ContainerStarted","Data":"0c3a7d61c47a42e914fee1732c444467e3c088988a33e21a5ac7937ebf0d454b"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.643141 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" event={"ID":"1a8d06af-468e-43a4-992f-b115fd13649e","Type":"ContainerStarted","Data":"2c35d09231b2f647e5ae28552d24c9b393977c98c0a67572a39df59f4699102a"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.646526 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" event={"ID":"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a","Type":"ContainerStarted","Data":"d4438eecb6650f749b6f08591cae77855dc721b60e79c4f1af92e9c03a67b09f"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.652838 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" event={"ID":"480f682a-63f7-4ef6-b10c-29c34222269b","Type":"ContainerStarted","Data":"8a15055f99d3ea001ea27210c15cb2b25c178bb501addf0249a860f99b998b43"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.652894 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" event={"ID":"480f682a-63f7-4ef6-b10c-29c34222269b","Type":"ContainerStarted","Data":"a420314f6d75d47bf1a8b8a901829275e0684f028eacc912ff8d5f409d3d7ae5"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.652908 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" event={"ID":"480f682a-63f7-4ef6-b10c-29c34222269b","Type":"ContainerStarted","Data":"3b2cf5a77c26e614f4f39d8073e017b640e9c4b0612f28e7ed54f20e3aac4f15"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.657122 4747 generic.go:334] "Generic (PLEG): container finished" podID="39864b7b-d0d0-4cdc-992d-6045872983cb" containerID="7e21784c78a550405fd09074ec3c115d3aacdf940feda8cb23ebc7ec2394b2f8" exitCode=0 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.657193 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" event={"ID":"39864b7b-d0d0-4cdc-992d-6045872983cb","Type":"ContainerDied","Data":"7e21784c78a550405fd09074ec3c115d3aacdf940feda8cb23ebc7ec2394b2f8"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.657216 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" event={"ID":"39864b7b-d0d0-4cdc-992d-6045872983cb","Type":"ContainerStarted","Data":"2dc996d453d39b6ee9a7b90dbae450ebc665383365c47b9124b3ce0e2ae88726"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.662525 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" event={"ID":"82daa056-c08b-4c56-817b-850b31cd016e","Type":"ContainerStarted","Data":"3ac538765e121b042b2eb3bf0038772038a48cea9c97da6d63a9911cd5d8ffb3"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.673619 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.673937 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.173920288 +0000 UTC m=+143.160231303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.676179 4747 generic.go:334] "Generic (PLEG): container finished" podID="774857d6-50c3-4ada-96ab-430dbeff8b0f" containerID="2caa3cd78e952b7ff922a549f5b4d70ba1f1cfa689ab92e4382c87431cf467e9" exitCode=0 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.676256 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" event={"ID":"774857d6-50c3-4ada-96ab-430dbeff8b0f","Type":"ContainerDied","Data":"2caa3cd78e952b7ff922a549f5b4d70ba1f1cfa689ab92e4382c87431cf467e9"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.676282 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" event={"ID":"774857d6-50c3-4ada-96ab-430dbeff8b0f","Type":"ContainerStarted","Data":"a90fc3499ac38c668a2675581aa214ed998470c7cb3483911cdd902c1b264796"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.678353 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" event={"ID":"eae28a17-d4a1-4b7b-8aed-9588514fd6e6","Type":"ContainerStarted","Data":"452ce6f4a1b49a31551fb056fba7f40992e623a872fd77a5b6bbc6e099711de1"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.679640 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" event={"ID":"fc6037b8-8286-4388-b575-81d0d1f698c8","Type":"ContainerStarted","Data":"bb05dc4a37308cd789fd71cdf5240f2c2470e99dae0d6707d221037c0724ef4e"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.681653 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tv788" event={"ID":"c5734b55-b478-4525-b5da-88b63b4812d0","Type":"ContainerStarted","Data":"fe2c1ca3335d1e3aeec196924787d94258aff2279976e3dd6052bac7c365d0b4"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.681689 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tv788" event={"ID":"c5734b55-b478-4525-b5da-88b63b4812d0","Type":"ContainerStarted","Data":"cba4076229b8bc66c328cfa08d0600b304469d0977fa39dcffca374bf9754112"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.683047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" event={"ID":"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7","Type":"ContainerStarted","Data":"6ec21d65d617bb08263afb61cf677f54d30f6f5b38b0ab85b3d0b11e2c477bd2"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.683083 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" event={"ID":"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7","Type":"ContainerStarted","Data":"707bf8dbb366227b54900ddd90d9abc160b96a29a0f5f2704322a1595328a89e"} Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.683287 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.684651 4747 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pwnxf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.684677 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.760653 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.770161 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.770787 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mr6p4"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.774836 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.775219 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.777193 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.277174884 +0000 UTC m=+143.263485899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.790422 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.795666 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tdmwg"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.819543 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.834648 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.858803 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nfwnt"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.870143 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.876687 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.876860 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.376839169 +0000 UTC m=+143.363150184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.877039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.877723 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.37769687 +0000 UTC m=+143.364007875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: W1126 13:17:35.886150 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f6ec2d_f94c_434b_8072_2beaee292fb6.slice/crio-68daf1c6805b6e1d4c690f6b979a086f9dd088c227917e98234c7bbd2415eedb WatchSource:0}: Error finding container 68daf1c6805b6e1d4c690f6b979a086f9dd088c227917e98234c7bbd2415eedb: Status 404 returned error can't find the container with id 68daf1c6805b6e1d4c690f6b979a086f9dd088c227917e98234c7bbd2415eedb Nov 26 13:17:35 crc kubenswrapper[4747]: W1126 13:17:35.890449 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0882051f_62cd_4cbd_a3b5_561072a04aeb.slice/crio-43fbb66a99e4268170e85b79dc1cf96cbd9aa3451ca5c573bb8cd0957f7ffb42 WatchSource:0}: Error finding container 43fbb66a99e4268170e85b79dc1cf96cbd9aa3451ca5c573bb8cd0957f7ffb42: Status 404 returned error can't find the container with id 43fbb66a99e4268170e85b79dc1cf96cbd9aa3451ca5c573bb8cd0957f7ffb42 Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.973884 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.973935 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6zcnr"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.973945 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ksg5q"] Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.980654 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:35 crc kubenswrapper[4747]: E1126 13:17:35.980999 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.480982837 +0000 UTC m=+143.467293852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:35 crc kubenswrapper[4747]: I1126 13:17:35.984583 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.044173 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.083765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.084214 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.584199812 +0000 UTC m=+143.570510827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.093224 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk"] Nov 26 13:17:36 crc kubenswrapper[4747]: W1126 13:17:36.127865 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1a884ca_524b_4ff7_b955_c9b207e2861d.slice/crio-f8ed154db0a71baf005aa87aa2a97075a3a7c4270eca698b4fa63b3df079e991 WatchSource:0}: Error finding container f8ed154db0a71baf005aa87aa2a97075a3a7c4270eca698b4fa63b3df079e991: Status 404 returned error can't find the container with id f8ed154db0a71baf005aa87aa2a97075a3a7c4270eca698b4fa63b3df079e991 Nov 26 13:17:36 crc kubenswrapper[4747]: W1126 13:17:36.128961 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616d1d14_e2cc_473f_98d9_9ed776ebfd4e.slice/crio-e6e745e8b33e9b64d548afe100601c376b9effe379f96f007aa3b6c327276fc8 WatchSource:0}: Error finding container e6e745e8b33e9b64d548afe100601c376b9effe379f96f007aa3b6c327276fc8: Status 404 returned error can't find the container with id e6e745e8b33e9b64d548afe100601c376b9effe379f96f007aa3b6c327276fc8 Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.184430 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.184798 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.684781721 +0000 UTC m=+143.671092736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.214248 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.281657 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8d24"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.291871 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.293739 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.79371376 +0000 UTC m=+143.780024775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.397681 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.398003 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.897986912 +0000 UTC m=+143.884297927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.499704 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.499986 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:36.999974266 +0000 UTC m=+143.986285281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.600320 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.601155 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.101140379 +0000 UTC m=+144.087451394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.704986 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.705300 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.205288848 +0000 UTC m=+144.191599853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.791544 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" event={"ID":"1e57bddb-aeda-470b-a095-ce0d84023e77","Type":"ContainerStarted","Data":"0aa95cb216417c17269e0505b64dab3599c6c899a1a91a9fbc906af27839fbed"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.796866 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nfwnt" event={"ID":"66ee9991-c7b9-4f4d-a995-6dcbce726841","Type":"ContainerStarted","Data":"e1c5a1a11f5a78875e6cf4965402754ac818d40a7c04edd296e03f268a42bacb"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.806459 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.806777 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.306762539 +0000 UTC m=+144.293073554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.813598 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" event={"ID":"fc6037b8-8286-4388-b575-81d0d1f698c8","Type":"ContainerStarted","Data":"c537088f97cbc6f9f5e6b0aba10367d048223b813af8e1e667f6adfd7c74eaf4"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.817391 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" event={"ID":"82daa056-c08b-4c56-817b-850b31cd016e","Type":"ContainerStarted","Data":"64fd1f6a3909308b935a6465a2f05853c3b8e7ecbfad2d5b32d9abc047a036f5"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.822209 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.853894 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" event={"ID":"38ce8782-9463-46a1-b5ea-73c6d3c01589","Type":"ContainerStarted","Data":"c5855d57576d458b09d03c00986dbbe40494d41dd6f4ddf50e21be4017236fb6"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.857486 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.860659 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" event={"ID":"9e375034-28f3-4050-8e6a-8d6edc3abe02","Type":"ContainerStarted","Data":"2b38a9c8e04657b505adaa2f23c2064c9493caa5f59ab3163d3a03ebd23d0937"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.860767 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpwnp"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.862012 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" event={"ID":"616d1d14-e2cc-473f-98d9-9ed776ebfd4e","Type":"ContainerStarted","Data":"e6e745e8b33e9b64d548afe100601c376b9effe379f96f007aa3b6c327276fc8"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.869596 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp"] Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.883192 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" event={"ID":"0882051f-62cd-4cbd-a3b5-561072a04aeb","Type":"ContainerStarted","Data":"43fbb66a99e4268170e85b79dc1cf96cbd9aa3451ca5c573bb8cd0957f7ffb42"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.910230 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:36 crc kubenswrapper[4747]: E1126 13:17:36.910592 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.410576629 +0000 UTC m=+144.396887644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.916731 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjsbq" event={"ID":"e3f6ec2d-f94c-434b-8072-2beaee292fb6","Type":"ContainerStarted","Data":"68daf1c6805b6e1d4c690f6b979a086f9dd088c227917e98234c7bbd2415eedb"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.923697 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" event={"ID":"1a8d06af-468e-43a4-992f-b115fd13649e","Type":"ContainerStarted","Data":"b5f82f0143ab9f8f2c0c0c56d7b6d35032401b42d811b93e7958117ca637e497"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.926830 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" event={"ID":"5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a","Type":"ContainerStarted","Data":"ccd803373355f79ac684543b803d0a34139e1fc3c9e118a6954dd12b92470409"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.927192 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.930010 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" event={"ID":"ec6afd64-e5b6-4851-a35e-db5a9490cdcb","Type":"ContainerStarted","Data":"611f9ce6173846163f6b3af384bfe092c0ca786d3da660ecb01d983ddb114a34"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.938843 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ctbq5" event={"ID":"4395cb3b-b843-4c5b-8312-adcd0887d777","Type":"ContainerStarted","Data":"8cc05bcf03b79ad5327c4882ce1f6ede14774fe52ddd188cdf7336f34d598ac1"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.946740 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" event={"ID":"e1a884ca-524b-4ff7-b955-c9b207e2861d","Type":"ContainerStarted","Data":"f8ed154db0a71baf005aa87aa2a97075a3a7c4270eca698b4fa63b3df079e991"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.953648 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" event={"ID":"aeb24804-3be2-46c5-b1a6-494b7b271aee","Type":"ContainerStarted","Data":"5e029d72e3bae615a62be6123e483d63204d7efde42a54b39a134b362409bff6"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.960778 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mr6p4" event={"ID":"1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba","Type":"ContainerStarted","Data":"5b4f145e0f079ccea0ce1628eb4ebc5d33339cdedbd867a98885e11e31412f61"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.969139 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" event={"ID":"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b","Type":"ContainerStarted","Data":"116e5aa1d36f1b7e29dd071696ef94c3597a23e9c5f0b3bfcd9d8e0cd66c9c08"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.980134 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" event={"ID":"2ce131be-f5b6-4812-8e89-ed2702d6f47f","Type":"ContainerStarted","Data":"a91214e32b16cc7373fa65c50b39e96b5f74bd6f724b50556db3faf0bbd778e9"} Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.990028 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.990211 4747 patch_prober.go:28] interesting pod/console-operator-58897d9998-qvvvm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.990240 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" podUID="5a7bfd7c-5b8c-4f76-95a7-4d67c0bbba6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.991580 4747 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sgvf9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 26 13:17:36 crc kubenswrapper[4747]: I1126 13:17:36.991607 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" podUID="82daa056-c08b-4c56-817b-850b31cd016e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.011072 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.012175 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.512161143 +0000 UTC m=+144.498472158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.119765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.124419 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.624399085 +0000 UTC m=+144.610710100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: W1126 13:17:37.139728 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaedc5169_b6a2_4ffe_8845_099fc5b6f9c1.slice/crio-be6e2429cd5a5978aafd4864bcc458d76d2f03ec134ad2371e3cf620597e3229 WatchSource:0}: Error finding container be6e2429cd5a5978aafd4864bcc458d76d2f03ec134ad2371e3cf620597e3229: Status 404 returned error can't find the container with id be6e2429cd5a5978aafd4864bcc458d76d2f03ec134ad2371e3cf620597e3229 Nov 26 13:17:37 crc kubenswrapper[4747]: W1126 13:17:37.170219 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e670ff_0ad9_4b6b_9017_91aa22702320.slice/crio-3f297661a6047aabe3f06c9370056ce4de35084c1632062c8e4e74e47794f073 WatchSource:0}: Error finding container 3f297661a6047aabe3f06c9370056ce4de35084c1632062c8e4e74e47794f073: Status 404 returned error can't find the container with id 3f297661a6047aabe3f06c9370056ce4de35084c1632062c8e4e74e47794f073 Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.171254 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" podStartSLOduration=123.171201392 podStartE2EDuration="2m3.171201392s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 
13:17:37.170791891 +0000 UTC m=+144.157102906" watchObservedRunningTime="2025-11-26 13:17:37.171201392 +0000 UTC m=+144.157512407" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.189193 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tv788" podStartSLOduration=123.189175224 podStartE2EDuration="2m3.189175224s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.187274106 +0000 UTC m=+144.173585131" watchObservedRunningTime="2025-11-26 13:17:37.189175224 +0000 UTC m=+144.175486239" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.224651 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.231002 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.231437 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.731411646 +0000 UTC m=+144.717722661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.249704 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pdp9j" podStartSLOduration=123.249675935 podStartE2EDuration="2m3.249675935s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.241374946 +0000 UTC m=+144.227685961" watchObservedRunningTime="2025-11-26 13:17:37.249675935 +0000 UTC m=+144.235986950" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.278638 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jcxz6" podStartSLOduration=123.278616553 podStartE2EDuration="2m3.278616553s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.276758406 +0000 UTC m=+144.263069421" watchObservedRunningTime="2025-11-26 13:17:37.278616553 +0000 UTC m=+144.264927568" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.332028 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.339342 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.839325549 +0000 UTC m=+144.825636564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.358334 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" podStartSLOduration=122.358308216 podStartE2EDuration="2m2.358308216s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.357963188 +0000 UTC m=+144.344274203" watchObservedRunningTime="2025-11-26 13:17:37.358308216 +0000 UTC m=+144.344619231" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.359683 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ff9mx" podStartSLOduration=123.359677661 podStartE2EDuration="2m3.359677661s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.321120411 +0000 UTC m=+144.307431426" watchObservedRunningTime="2025-11-26 13:17:37.359677661 +0000 UTC m=+144.345988676" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.395895 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" podStartSLOduration=123.395877341 podStartE2EDuration="2m3.395877341s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.394522967 +0000 UTC m=+144.380833982" watchObservedRunningTime="2025-11-26 13:17:37.395877341 +0000 UTC m=+144.382188356" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.434471 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.434819 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-26 13:17:37.934801739 +0000 UTC m=+144.921112754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.484733 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvkb7" podStartSLOduration=123.484713424 podStartE2EDuration="2m3.484713424s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:37.435091137 +0000 UTC m=+144.421402152" watchObservedRunningTime="2025-11-26 13:17:37.484713424 +0000 UTC m=+144.471024439" Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.512179 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.542923 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.543522 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.043504842 +0000 UTC m=+145.029815857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.594459 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4ll78"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.605817 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.655079 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.655568 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.155542649 +0000 UTC m=+145.141853664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.691673 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.765179 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vblgj"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.766953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.767550 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.267527955 +0000 UTC m=+145.253838970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.780489 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.869763 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.870272 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.370214347 +0000 UTC m=+145.356525352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.870667 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.871472 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.371448778 +0000 UTC m=+145.357759793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.876311 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.876344 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs"] Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.971810 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.972719 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.472687103 +0000 UTC m=+145.458998118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:37 crc kubenswrapper[4747]: I1126 13:17:37.975279 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:37 crc kubenswrapper[4747]: E1126 13:17:37.975821 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.475803481 +0000 UTC m=+145.462114496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.042498 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" event={"ID":"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1","Type":"ContainerStarted","Data":"be6e2429cd5a5978aafd4864bcc458d76d2f03ec134ad2371e3cf620597e3229"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.062810 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" event={"ID":"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e","Type":"ContainerStarted","Data":"341386e2d1fb4ef5e5d4aa53d7527be0d5376212550c2b57afa53b0a04d21e1a"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.076362 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
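
[Editor's note: this annotation and the sketch below are editorial additions, not part of the journal capture.] The two error records repeating through this stretch are one retry loop seen from both ends: the kubelet is tearing down volume pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 for the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b while also trying to mount the same PVC for the incoming image-registry-697d97f7c8-sddmq pod, and both operations fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". The likely cause, given the surrounding records, is that the provisioner's node plugin (csi-hostpathplugin-4ll78, which is only now being synced and reporting ContainerStarted events just below) has not yet re-registered with this kubelet process, so nestedpendingoperations.go requeues each failed attempt with durationBeforeRetry 500ms, and the same pair of errors recurs throughout these seconds. A minimal sketch for quantifying the loop from a capture like this one; it assumes Python 3, journal text on stdin, and the exact record wording shown above (the script name and output format are invented for illustration):

# tally_csi_retries.py: editorial sketch, not a tool referenced by this log.
# Counts the nestedpendingoperations.go:348 "Error: ..." payloads per
# operation/volume/driver, e.g.
#   Error: MountVolume.MountDevice failed for volume "pvc-..." ... driver name
#   kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
import re
import sys
from collections import Counter

RETRY = re.compile(
    r'Error: (?P<op>\w+\.\w+) failed for volume "(?P<vol>[^"]+)"'
    r'.*?driver name (?P<driver>\S+) not found'
)

counts = Counter(
    (m['op'], m['vol'], m['driver']) for m in RETRY.finditer(sys.stdin.read())
)
for (op, vol, driver), n in counts.most_common():
    print(f'{n:4d}x {op:24s} volume={vol} driver={driver}')

Run as: python3 tally_csi_retries.py < kubelet-journal.txt. On this capture it should print two counters, UnmountVolume.TearDown and MountVolume.MountDevice, both naming the same PVC and driver; the loop is expected to clear on its own once the driver re-registers and the pending operations succeed.
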
Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.076670 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.576654217 +0000 UTC m=+145.562965232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.098447 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" event={"ID":"1a8d06af-468e-43a4-992f-b115fd13649e","Type":"ContainerStarted","Data":"e5612ff67354b1fdab46d1906e3efc93c597720f94d3096c00855a0c1791b907"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.120392 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" event={"ID":"36a8ba3a-2269-4af0-9ec6-fc3f24dfba1b","Type":"ContainerStarted","Data":"e79643545a5d44271e97a4c4da7154b201ccc8d125be089ed3093beacd306522"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.124985 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" event={"ID":"97e670ff-0ad9-4b6b-9017-91aa22702320","Type":"ContainerStarted","Data":"3f297661a6047aabe3f06c9370056ce4de35084c1632062c8e4e74e47794f073"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.147476 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" event={"ID":"eae28a17-d4a1-4b7b-8aed-9588514fd6e6","Type":"ContainerStarted","Data":"435bb7de0fe8ccc6864fb55d14de6e8ebf2fe68b36e906a00067e35fd8d57661"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.148384 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.152187 4747 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cq6xq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.152230 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" podUID="eae28a17-d4a1-4b7b-8aed-9588514fd6e6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.167330 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" event={"ID":"1fe80ce7-7278-4c19-b658-1ec3336280e8","Type":"ContainerStarted","Data":"16a809a1975f35b0c3a645fe51458092022a0a3cde29e84107f8804bcde6bf5b"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.184781 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.185119 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.685097054 +0000 UTC m=+145.671408069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.185330 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mr6p4" event={"ID":"1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba","Type":"ContainerStarted","Data":"4d61df0fd7ddfa3f42f8634e3e2a7a255c9dc26d26da7caafec0d9012a40fa6b"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.186067 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mr6p4" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.191108 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ctbq5" event={"ID":"4395cb3b-b843-4c5b-8312-adcd0887d777","Type":"ContainerStarted","Data":"e3cc921b6256442e52ba38a4e8bf3be29bfc7305eb8e5c5fea065477483eea06"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.197029 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r4nz4" podStartSLOduration=124.197002393 podStartE2EDuration="2m4.197002393s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.187182236 +0000 UTC m=+145.173493251" watchObservedRunningTime="2025-11-26 13:17:38.197002393 +0000 UTC m=+145.183313428" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.197565 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8fbql" podStartSLOduration=124.197559487 podStartE2EDuration="2m4.197559487s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.148475823 +0000 UTC m=+145.134786838" watchObservedRunningTime="2025-11-26 13:17:38.197559487 +0000 UTC m=+145.183870502" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.198203 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-mr6p4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.198250 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mr6p4" podUID="1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial 
tcp 10.217.0.12:8080: connect: connection refused" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.213436 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" event={"ID":"2ce131be-f5b6-4812-8e89-ed2702d6f47f","Type":"ContainerStarted","Data":"8474abe8c7592e2a7274eec6263577b4044b60375b6861a84a76f2e3f92d768b"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.233764 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" event={"ID":"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3","Type":"ContainerStarted","Data":"09eec0a6bbfecea0e5b3770271574f52de37f72f62f2f7392f46bd6b96b4400c"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.234566 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" event={"ID":"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d","Type":"ContainerStarted","Data":"0e42a246d604012be885003ca14b4435fb6f61157dd55ce238873d55910ff871"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.245732 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" event={"ID":"6ec07ab0-a220-458e-9fde-76c2f9f8cbd7","Type":"ContainerStarted","Data":"8c2e670ccb2fc04a91ce6a5282243cdb2213019aad55f054425950040a411abf"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.262315 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" event={"ID":"99bc3650-9d3c-4b40-bc41-cd10a68378e8","Type":"ContainerStarted","Data":"fe1bb03ae3815a92619f5a8002d5efe9f5a2867c9e2bfbc6acb2c6267c0453fa"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.273640 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" event={"ID":"6cba76ac-e300-4f6f-b100-1546b5bbd85b","Type":"ContainerStarted","Data":"c148dfb07a7f5a15135b6075ec4442d2a59b0ab771da9d0681c159919f2d7253"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.280032 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" event={"ID":"aeb24804-3be2-46c5-b1a6-494b7b271aee","Type":"ContainerStarted","Data":"a14f4c82f14ff7623f0976f6c254fc187b344e4a8c0a6f9b6b404c22944aba9e"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.287724 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.288793 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" podStartSLOduration=123.28877844 podStartE2EDuration="2m3.28877844s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.257441672 +0000 UTC m=+145.243752687" watchObservedRunningTime="2025-11-26 13:17:38.28877844 +0000 UTC m=+145.275089455" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.289176 4747 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.78916068 +0000 UTC m=+145.775471695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.291490 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" event={"ID":"9e375034-28f3-4050-8e6a-8d6edc3abe02","Type":"ContainerStarted","Data":"c96359f274cf595db3fd6c99f2cad5fc08d4810beae91f22ff0a4a174d45d029"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.305032 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6k6lm" podStartSLOduration=124.305014428 podStartE2EDuration="2m4.305014428s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.304385333 +0000 UTC m=+145.290696338" watchObservedRunningTime="2025-11-26 13:17:38.305014428 +0000 UTC m=+145.291325443" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.305147 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ctbq5" podStartSLOduration=124.305142872 podStartE2EDuration="2m4.305142872s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.289935179 +0000 UTC m=+145.276246184" watchObservedRunningTime="2025-11-26 13:17:38.305142872 +0000 UTC m=+145.291453887" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.312610 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" event={"ID":"0882051f-62cd-4cbd-a3b5-561072a04aeb","Type":"ContainerStarted","Data":"d5ceb9267d21955875e88e12c4fc10e4eff6df1d611761dd1a5d9a7589a90d34"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.331177 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mr6p4" podStartSLOduration=124.331163746 podStartE2EDuration="2m4.331163746s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.330712365 +0000 UTC m=+145.317023380" watchObservedRunningTime="2025-11-26 13:17:38.331163746 +0000 UTC m=+145.317474761" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.349363 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" 
event={"ID":"774857d6-50c3-4ada-96ab-430dbeff8b0f","Type":"ContainerStarted","Data":"356b5e49c7a3816c3dd21a7fc3a4039478aaa1e02ea1cc1c1c8788f1cfa39a67"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.372449 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" podStartSLOduration=124.372369092 podStartE2EDuration="2m4.372369092s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.370443614 +0000 UTC m=+145.356754639" watchObservedRunningTime="2025-11-26 13:17:38.372369092 +0000 UTC m=+145.358680107" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.387533 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" event={"ID":"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b","Type":"ContainerStarted","Data":"3a886ef03fe1a7c2ef89c3db74e3dadd0c985e61fe34619fe4439339d564b078"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.389598 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.391697 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.891677217 +0000 UTC m=+145.877988232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.394429 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" event={"ID":"f637a777-d9a6-44b1-a6fd-de227846cf5b","Type":"ContainerStarted","Data":"e5b5c8a69fc9275636f272ba04a8685f32351c6e077aebc7bc175ea493df46af"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.395441 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" event={"ID":"1d11da9a-1232-421f-9b58-63cb5d519a0a","Type":"ContainerStarted","Data":"96990e671e3029fc819874f78f18eb0979f2d593d32dd45e57eb9e717c6e494c"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.402495 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vblgj" event={"ID":"e2da97ad-2e38-47fe-8223-dd5fac723e66","Type":"ContainerStarted","Data":"46d9f7e57717497a373b06078358c3a21adc9a3134ebb238b8281d950316df2f"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.412619 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gqp27" podStartSLOduration=124.412598163 podStartE2EDuration="2m4.412598163s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.411597868 +0000 UTC m=+145.397908883" watchObservedRunningTime="2025-11-26 13:17:38.412598163 +0000 UTC m=+145.398909178" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.418380 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" event={"ID":"e1a884ca-524b-4ff7-b955-c9b207e2861d","Type":"ContainerStarted","Data":"87fd30ac460846e704b3bb18b7330d69f6b774df12748dfc1a36b8c74e6ee2ff"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.427335 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" event={"ID":"3dbff5f0-a78d-4a15-9652-f01b1e882a42","Type":"ContainerStarted","Data":"cf25ef884e074c4254890f3bfe328ce74dc6899376fbe2d985255c4917c2a3b8"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.439874 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" event={"ID":"39864b7b-d0d0-4cdc-992d-6045872983cb","Type":"ContainerStarted","Data":"432dbf948612d1df62b2416913a9e189e07d8cae98fcad7aa09844744446d3f1"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.440128 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.454622 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-tdmwg" podStartSLOduration=124.454597239 podStartE2EDuration="2m4.454597239s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.439342436 +0000 UTC m=+145.425653451" watchObservedRunningTime="2025-11-26 13:17:38.454597239 +0000 UTC m=+145.440908254" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.457322 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" event={"ID":"78884d47-16c2-4ce8-8986-814701e6f244","Type":"ContainerStarted","Data":"5e2f081d72e6971feb682a0bf6c7c45487e75727b1f0d7d9164cefcc096e885d"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.493038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.493367 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.993340123 +0000 UTC m=+145.979651138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.493563 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.495653 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:38.995640781 +0000 UTC m=+145.981951796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.500622 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjsbq" event={"ID":"e3f6ec2d-f94c-434b-8072-2beaee292fb6","Type":"ContainerStarted","Data":"6064d318ebd1075c66b5eed98df3ec84a218d5a1a809d60ebd718d669674bd5a"} Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.518904 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ctbq5" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.526725 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wjngl" podStartSLOduration=124.526693362 podStartE2EDuration="2m4.526693362s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.466333164 +0000 UTC m=+145.452644179" watchObservedRunningTime="2025-11-26 13:17:38.526693362 +0000 UTC m=+145.513004377" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.527369 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" podStartSLOduration=123.527361119 podStartE2EDuration="2m3.527361119s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.525232045 +0000 UTC m=+145.511543070" watchObservedRunningTime="2025-11-26 13:17:38.527361119 +0000 UTC m=+145.513672134" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.529447 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:17:38 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Nov 26 13:17:38 crc kubenswrapper[4747]: [+]process-running ok Nov 26 13:17:38 crc kubenswrapper[4747]: healthz check failed Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.529501 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.529694 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.552690 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qvvvm" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.585145 4747 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" podStartSLOduration=124.585129711 podStartE2EDuration="2m4.585129711s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.58469416 +0000 UTC m=+145.571005175" watchObservedRunningTime="2025-11-26 13:17:38.585129711 +0000 UTC m=+145.571440726" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.599619 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.599846 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.09980029 +0000 UTC m=+146.086111305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.600022 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.603469 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.103431161 +0000 UTC m=+146.089742166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.633761 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" podStartSLOduration=123.633743193 podStartE2EDuration="2m3.633743193s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.632196545 +0000 UTC m=+145.618507560" watchObservedRunningTime="2025-11-26 13:17:38.633743193 +0000 UTC m=+145.620054198" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.716320 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.716588 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.216572626 +0000 UTC m=+146.202883641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.717316 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" podStartSLOduration=124.717300114 podStartE2EDuration="2m4.717300114s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.715712574 +0000 UTC m=+145.702023589" watchObservedRunningTime="2025-11-26 13:17:38.717300114 +0000 UTC m=+145.703611129" Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.824093 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.824843 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.324827348 +0000 UTC m=+146.311138363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:38 crc kubenswrapper[4747]: I1126 13:17:38.925442 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:38 crc kubenswrapper[4747]: E1126 13:17:38.925837 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.425822017 +0000 UTC m=+146.412133032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.026996 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.027609 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.527589786 +0000 UTC m=+146.513900801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.060718 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.127730 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.128169 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.628149934 +0000 UTC m=+146.614460949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.130324 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fjsbq" podStartSLOduration=7.130312328 podStartE2EDuration="7.130312328s" podCreationTimestamp="2025-11-26 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:38.80065685 +0000 UTC m=+145.786967865" watchObservedRunningTime="2025-11-26 13:17:39.130312328 +0000 UTC m=+146.116623343" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.231099 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.232840 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.732827556 +0000 UTC m=+146.719138571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.334425 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.335081 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.835064076 +0000 UTC m=+146.821375091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.436745 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.437077 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:39.93706565 +0000 UTC m=+146.923376665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.498860 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.498911 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.509070 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" event={"ID":"aedc5169-b6a2-4ffe-8845-099fc5b6f9c1","Type":"ContainerStarted","Data":"a8815e3b2e7d0b5485b7e24c3f2b3a7427fbd40d9de1a0914a2acee9771e8ad8"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.512926 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:17:39 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Nov 26 13:17:39 crc kubenswrapper[4747]: [+]process-running ok Nov 26 13:17:39 crc kubenswrapper[4747]: healthz check failed Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.512976 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.516045 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" 
event={"ID":"6cba76ac-e300-4f6f-b100-1546b5bbd85b","Type":"ContainerStarted","Data":"dbda3a19755caf65bf94e5da4f8ec6b623e053431bb2043601b44c3c89320420"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.522750 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" event={"ID":"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d","Type":"ContainerStarted","Data":"6f5be912877d3cc528e6d6d5458005076af34a582e9a9a4d7732b865158dd190"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.523436 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.525336 4747 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zqvkf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.525384 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.525728 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" event={"ID":"1fe80ce7-7278-4c19-b658-1ec3336280e8","Type":"ContainerStarted","Data":"454ee67fec31c7a1a6e8738a83522a439dcc5ab749e66579e301c1376015218a"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.532213 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" event={"ID":"1d11da9a-1232-421f-9b58-63cb5d519a0a","Type":"ContainerStarted","Data":"3280b28c2aead7b55c1aab3d50476f1f50b5d80cb7907d18a3ca42f62208b920"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.532458 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.534032 4747 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qrchs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.534105 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" podUID="1d11da9a-1232-421f-9b58-63cb5d519a0a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.534713 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" event={"ID":"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e","Type":"ContainerStarted","Data":"cfca6e4bec95649228cca35e3bda958e08cb0b6b6f0ba8490dd6ce88e8f3a735"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.534746 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" event={"ID":"b7e80cd2-2fc8-4304-8658-e2fa7ea1d52e","Type":"ContainerStarted","Data":"9891beb4c531bfab89d59528b0c5734d213433f3a91f18b3e6a262dc9e4cc012"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.537444 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.537853 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.037835413 +0000 UTC m=+147.024146428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.541023 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxqxh" podStartSLOduration=125.540997813 podStartE2EDuration="2m5.540997813s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.537651409 +0000 UTC m=+146.523962424" watchObservedRunningTime="2025-11-26 13:17:39.540997813 +0000 UTC m=+146.527308828" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.544755 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" event={"ID":"0aa005c4-b69f-4bb7-90e6-c2f58210f9d3","Type":"ContainerStarted","Data":"0f3c9c39e8df1cf4df79e5189d7f938a544d748eec0b37566e60c733c742535d"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.547319 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nfwnt" event={"ID":"66ee9991-c7b9-4f4d-a995-6dcbce726841","Type":"ContainerStarted","Data":"88a99b15a5d6efc64b0c5cb8698cdfccbf17dedc1ffbd869cfd6ea72fb75be10"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.547364 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nfwnt" event={"ID":"66ee9991-c7b9-4f4d-a995-6dcbce726841","Type":"ContainerStarted","Data":"b3563cd4b537d79f3085b56d7625ec04dbc5c2383e5a493626a32a3dd9cd7e7f"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.547771 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nfwnt" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.554443 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" 
event={"ID":"616d1d14-e2cc-473f-98d9-9ed776ebfd4e","Type":"ContainerStarted","Data":"2bb2eb1898a46c203f5223274b8e94a19f62bcde32979aed89f103f0ee8c927c"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.556271 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" event={"ID":"99bc3650-9d3c-4b40-bc41-cd10a68378e8","Type":"ContainerStarted","Data":"b13af8710c24faa02b83e06dfd5d191cd2a38df15b9eae0c8c09097b2bdf7fb0"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.556300 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" event={"ID":"99bc3650-9d3c-4b40-bc41-cd10a68378e8","Type":"ContainerStarted","Data":"703a69fc02befd2bfcb729c02a23c9a7bf0bd017e811f0feb9e6f4de84bc8424"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.583232 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" event={"ID":"f637a777-d9a6-44b1-a6fd-de227846cf5b","Type":"ContainerStarted","Data":"0e2465b6eecd28d740b740db612f47ce47acb005a7167cc037c7a17debfe7827"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.606536 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.606779 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.621222 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" event={"ID":"c5eb732b-9fb0-4c46-b550-dce7eebf78f5","Type":"ContainerStarted","Data":"14b9320769891c0d687e23fb651e81e33bdf15f9e5e6a0b8047a1c903be916e6"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.621253 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" event={"ID":"c5eb732b-9fb0-4c46-b550-dce7eebf78f5","Type":"ContainerStarted","Data":"5e563b2a1326c92e21cfb14e656bf009e0989746a31a5c2ce4bb72f9805c7f77"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.621263 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" event={"ID":"c5eb732b-9fb0-4c46-b550-dce7eebf78f5","Type":"ContainerStarted","Data":"cb84f597c18b6ac134b637efbe378840183c12a3585195036dbcb3013df3b0c2"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.634557 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vblgj" event={"ID":"e2da97ad-2e38-47fe-8223-dd5fac723e66","Type":"ContainerStarted","Data":"8484effbf5885f27a2915f1ac2a27524a7b6e2c9533a71138d9adbbafedd5b8e"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.635278 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gbnb2" podStartSLOduration=124.635265573 podStartE2EDuration="2m4.635265573s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.634603376 +0000 UTC m=+146.620914391" watchObservedRunningTime="2025-11-26 13:17:39.635265573 +0000 UTC m=+146.621576588" Nov 26 
13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.635834 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" podStartSLOduration=124.635830357 podStartE2EDuration="2m4.635830357s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.606408237 +0000 UTC m=+146.592719252" watchObservedRunningTime="2025-11-26 13:17:39.635830357 +0000 UTC m=+146.622141372" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.637398 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.639497 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.643217 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.143198982 +0000 UTC m=+147.129509997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.694453 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" event={"ID":"2ce131be-f5b6-4812-8e89-ed2702d6f47f","Type":"ContainerStarted","Data":"c0fd28d96dd1b9136ace4ff423fc14affa981eb2d92a5d8318b61fca21909e5c"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.694763 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.711160 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" event={"ID":"ec6afd64-e5b6-4851-a35e-db5a9490cdcb","Type":"ContainerStarted","Data":"0637b954dc5563ef98f843b2130e1c2fc5a7af5383dca2d04fc4e4ce5058527a"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.712091 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.713410 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" event={"ID":"3dbff5f0-a78d-4a15-9652-f01b1e882a42","Type":"ContainerStarted","Data":"6853a3f373dedb9acc16da9fbca01956a1245bbbb1767364de17ec9932c1f226"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.713439 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" event={"ID":"3dbff5f0-a78d-4a15-9652-f01b1e882a42","Type":"ContainerStarted","Data":"6de9f80eb32b997ca603bf48c0df0145daceb9250b430c4da3ea2df17f054201"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.717558 4747 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ksg5q container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.717625 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.718907 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" event={"ID":"d1c32fed-c28d-42e8-9bfb-e67af83e8c0b","Type":"ContainerStarted","Data":"5bd34e2ec37fc2f9863f02de4cbc628c20159c6afeb933d4c77d43bfddda7158"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.729098 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8d24" podStartSLOduration=125.729084222 podStartE2EDuration="2m5.729084222s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.727700307 +0000 UTC m=+146.714011322" watchObservedRunningTime="2025-11-26 13:17:39.729084222 +0000 UTC m=+146.715395237" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.740997 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.741544 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" event={"ID":"38ce8782-9463-46a1-b5ea-73c6d3c01589","Type":"ContainerStarted","Data":"a9aba7d1c17143b3a7ec69e5c87c35d16696fe5af9ae333962f393113bacd2fe"} Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.742404 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.242389186 +0000 UTC m=+147.228700201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.764457 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" podStartSLOduration=124.764441321 podStartE2EDuration="2m4.764441321s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.761386204 +0000 UTC m=+146.747697219" watchObservedRunningTime="2025-11-26 13:17:39.764441321 +0000 UTC m=+146.750752336" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.775318 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" event={"ID":"97e670ff-0ad9-4b6b-9017-91aa22702320","Type":"ContainerStarted","Data":"5ade3ab278064877c6a424a890ac73797aa0a3612b22b93ba86718385f56e79a"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.776329 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.795208 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rpwnp" event={"ID":"78884d47-16c2-4ce8-8986-814701e6f244","Type":"ContainerStarted","Data":"a139dee5f5b586be6da5f53ee2a30463ba7b5d11b61953c25077b1df53ef68bf"} Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.812095 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-mr6p4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.812471 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mr6p4" podUID="1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.847098 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.857554 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vblgj" podStartSLOduration=7.857531011 podStartE2EDuration="7.857531011s" podCreationTimestamp="2025-11-26 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.812361945 +0000 UTC 
m=+146.798672960" watchObservedRunningTime="2025-11-26 13:17:39.857531011 +0000 UTC m=+146.843842026" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.866464 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.366421575 +0000 UTC m=+147.352732590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.880791 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cq6xq" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.880839 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtfqg" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.880858 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5t5j" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.908383 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qhgpx" podStartSLOduration=125.908364459 podStartE2EDuration="2m5.908364459s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.898596713 +0000 UTC m=+146.884907728" watchObservedRunningTime="2025-11-26 13:17:39.908364459 +0000 UTC m=+146.894675474" Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.969131 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.969408 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.469378553 +0000 UTC m=+147.455689568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:39 crc kubenswrapper[4747]: I1126 13:17:39.969657 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:39 crc kubenswrapper[4747]: E1126 13:17:39.977600 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.477576729 +0000 UTC m=+147.463887744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.001756 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6zcnr" podStartSLOduration=126.001737697 podStartE2EDuration="2m6.001737697s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:39.938458516 +0000 UTC m=+146.924769531" watchObservedRunningTime="2025-11-26 13:17:40.001737697 +0000 UTC m=+146.988048702" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.002806 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" podStartSLOduration=126.002801253 podStartE2EDuration="2m6.002801253s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.001498341 +0000 UTC m=+146.987809366" watchObservedRunningTime="2025-11-26 13:17:40.002801253 +0000 UTC m=+146.989112268" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.071881 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pk2g" podStartSLOduration=126.07186619 podStartE2EDuration="2m6.07186619s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.034554142 +0000 UTC m=+147.020865157" watchObservedRunningTime="2025-11-26 13:17:40.07186619 +0000 UTC m=+147.058177205" Nov 26 13:17:40 crc kubenswrapper[4747]: 
I1126 13:17:40.074661 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.075043 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.575029099 +0000 UTC m=+147.561340114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.100602 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nfwnt" podStartSLOduration=8.100585822 podStartE2EDuration="8.100585822s" podCreationTimestamp="2025-11-26 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.072487425 +0000 UTC m=+147.058798440" watchObservedRunningTime="2025-11-26 13:17:40.100585822 +0000 UTC m=+147.086896837" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.101113 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg" podStartSLOduration=125.101109125 podStartE2EDuration="2m5.101109125s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.099778822 +0000 UTC m=+147.086089837" watchObservedRunningTime="2025-11-26 13:17:40.101109125 +0000 UTC m=+147.087420140" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.156257 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scvdh" podStartSLOduration=126.156242431 podStartE2EDuration="2m6.156242431s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.148491846 +0000 UTC m=+147.134802861" watchObservedRunningTime="2025-11-26 13:17:40.156242431 +0000 UTC m=+147.142553446" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.175684 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.176030 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.676017628 +0000 UTC m=+147.662328633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.180563 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fnj9w" podStartSLOduration=126.180550692 podStartE2EDuration="2m6.180550692s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.179368153 +0000 UTC m=+147.165679168" watchObservedRunningTime="2025-11-26 13:17:40.180550692 +0000 UTC m=+147.166861707" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.242309 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zw88v" podStartSLOduration=126.242294155 podStartE2EDuration="2m6.242294155s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.241448754 +0000 UTC m=+147.227759769" watchObservedRunningTime="2025-11-26 13:17:40.242294155 +0000 UTC m=+147.228605160" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.265257 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jq2nt" podStartSLOduration=126.265241352 podStartE2EDuration="2m6.265241352s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.262586655 +0000 UTC m=+147.248897670" watchObservedRunningTime="2025-11-26 13:17:40.265241352 +0000 UTC m=+147.251552357" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.279759 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.280778 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.780740971 +0000 UTC m=+147.767051986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.304341 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" podStartSLOduration=125.304311744 podStartE2EDuration="2m5.304311744s" podCreationTimestamp="2025-11-26 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:40.302998291 +0000 UTC m=+147.289309306" watchObservedRunningTime="2025-11-26 13:17:40.304311744 +0000 UTC m=+147.290622759" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.382959 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.383423 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:40.883408273 +0000 UTC m=+147.869719288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.464784 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.466853 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.484709 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.485280 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.485613 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:17:40.985599832 +0000 UTC m=+147.971910837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.494239 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.515244 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:17:40 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Nov 26 13:17:40 crc kubenswrapper[4747]: [+]process-running ok Nov 26 13:17:40 crc kubenswrapper[4747]: healthz check failed Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.515595 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.586699 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.586771 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54tw\" (UniqueName: \"kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.586842 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.586887 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.587406 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-26 13:17:41.087388681 +0000 UTC m=+148.073699686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.627564 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4f7qf"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.628518 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.633029 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.658531 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f7qf"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.694934 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.695878 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.195856478 +0000 UTC m=+148.182167493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.695899 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54tw\" (UniqueName: \"kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.695975 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.696024 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.696165 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.696493 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.196483924 +0000 UTC m=+148.182794939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.697166 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.697400 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.738645 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54tw\" (UniqueName: \"kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw\") pod \"certified-operators-6s4vv\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.761360 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w4bp" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.797646 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.797799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vhw\" (UniqueName: \"kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.797888 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.797937 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.798094 4747 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.298079078 +0000 UTC m=+148.284390093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.811372 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.824521 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.825778 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.839692 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" event={"ID":"1fe80ce7-7278-4c19-b658-1ec3336280e8","Type":"ContainerStarted","Data":"21998e5f22116caeeb6af548538837f968c46d6e4013835891f329a5338178d0"} Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.847364 4747 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zqvkf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.847419 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.847891 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-mr6p4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.847947 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mr6p4" podUID="1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.856173 4747 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qrchs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.856462 4747 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs" podUID="1d11da9a-1232-421f-9b58-63cb5d519a0a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.871001 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.902912 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vhw\" (UniqueName: \"kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.902953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.903028 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.903076 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.903868 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: E1126 13:17:40.903975 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.403868908 +0000 UTC m=+148.390179923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.904139 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.928749 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.984130 4747 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x8z2x container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]log ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]etcd ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/generic-apiserver-start-informers ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/max-in-flight-filter ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 26 13:17:40 crc kubenswrapper[4747]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 26 13:17:40 crc kubenswrapper[4747]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/project.openshift.io-projectcache ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-startinformers ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 26 13:17:40 crc kubenswrapper[4747]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 26 13:17:40 crc kubenswrapper[4747]: livez check failed Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.984502 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x" podUID="d1c32fed-c28d-42e8-9bfb-e67af83e8c0b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:40 crc kubenswrapper[4747]: I1126 13:17:40.990796 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vhw\" (UniqueName: \"kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw\") pod \"community-operators-4f7qf\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.006905 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.007904 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.008476 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.008624 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9vk\" (UniqueName: \"kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.010101 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.510078518 +0000 UTC m=+148.496389533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.071876 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94gg7"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.072719 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.127445 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.127517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9vk\" (UniqueName: \"kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.127637 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.127719 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.128993 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.129541 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.629530322 +0000 UTC m=+148.615841337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.130111 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.138022 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94gg7"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.237497 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.238681 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcp6\" (UniqueName: \"kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.238748 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.238784 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.239288 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.73925621 +0000 UTC m=+148.725567225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.240168 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9vk\" (UniqueName: \"kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk\") pod \"certified-operators-5pxjc\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.270323 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.343756 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcp6\" (UniqueName: \"kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.343839 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.343884 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.343902 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.344436 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.344907 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.345143 4747 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.845131932 +0000 UTC m=+148.831442947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.386223 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjcp6\" (UniqueName: \"kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6\") pod \"community-operators-94gg7\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") " pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.439418 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.449516 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.450002 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.450117 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.450179 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.450255 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:41.950222545 +0000 UTC m=+148.936533560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.450311 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.450420 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.454694 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.461039 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.468650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.469548 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.474519 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.475200 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.495777 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.496412 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.516548 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.526815 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.531071 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.531745 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:17:41 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Nov 26 13:17:41 crc kubenswrapper[4747]: [+]process-running ok Nov 26 13:17:41 crc kubenswrapper[4747]: healthz check failed Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.531780 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.550542 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.556756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.556831 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.556856 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.557204 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.057190514 +0000 UTC m=+149.043501529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.621855 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.657682 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.658296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.658365 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.658438 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.658535 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.158510861 +0000 UTC m=+149.144821876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.706762 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.760009 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.760510 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.260494695 +0000 UTC m=+149.246805710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.827037 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.866563 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.866745 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.366716166 +0000 UTC m=+149.353038751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.867209 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.867457 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.367449294 +0000 UTC m=+149.353760309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.905362 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f7qf"] Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.935830 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" event={"ID":"1fe80ce7-7278-4c19-b658-1ec3336280e8","Type":"ContainerStarted","Data":"e269c33f8cdbd392407467f2f219c3c3877e32e15c909d840ff6652dab09b142"} Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.950328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerStarted","Data":"fee48446d3e4c11ad79df68dbf940fa1ecfceff3733e9caebb87b1a4a2869309"} Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.950358 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerStarted","Data":"3fdb4fda7a0a0c8ce0212849b656416943ac3cb45a19534b7a0988553dc83a49"} Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.970152 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.970364 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:17:42.470333611 +0000 UTC m=+149.456644616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:41 crc kubenswrapper[4747]: I1126 13:17:41.970553 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:41 crc kubenswrapper[4747]: E1126 13:17:41.971992 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.471981033 +0000 UTC m=+149.458292048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.074299 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.075012 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.075228 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.575199598 +0000 UTC m=+149.561510613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.075476 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.076078 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.576029569 +0000 UTC m=+149.562340584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: W1126 13:17:42.112794 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b88059c_90c0_4fb3_9981_7324f7f2ce1f.slice/crio-5693c03d32fb3685c3c6393542ff62ec6b8be17f625b6eae09ed11b82d206840 WatchSource:0}: Error finding container 5693c03d32fb3685c3c6393542ff62ec6b8be17f625b6eae09ed11b82d206840: Status 404 returned error can't find the container with id 5693c03d32fb3685c3c6393542ff62ec6b8be17f625b6eae09ed11b82d206840 Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.166534 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94gg7"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.176180 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.176904 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.676885504 +0000 UTC m=+149.663196519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.181917 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.182500 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.682468635 +0000 UTC m=+149.668779640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.283285 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.283650 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.783635478 +0000 UTC m=+149.769946493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.314546 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:17:42 crc kubenswrapper[4747]: W1126 13:17:42.330930 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod633162d1_2efc_4d2c_9310_ff04a8d35a04.slice/crio-9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad WatchSource:0}: Error finding container 9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad: Status 404 returned error can't find the container with id 9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.394548 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.395526 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.895511811 +0000 UTC m=+149.881822826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.404503 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.405500 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.415118 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.415445 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.478555 4747 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.496217 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.496583 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.496623 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftg8\" (UniqueName: \"kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.496676 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.496811 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:42.996786887 +0000 UTC m=+149.983097902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.513938 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:17:42 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Nov 26 13:17:42 crc kubenswrapper[4747]: [+]process-running ok Nov 26 13:17:42 crc kubenswrapper[4747]: healthz check failed Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.514031 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598098 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598176 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598208 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftg8\" (UniqueName: \"kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598233 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.598523 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.098508245 +0000 UTC m=+150.084819260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598669 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.598711 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.619881 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftg8\" (UniqueName: \"kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8\") pod \"redhat-marketplace-rqbp5\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.699662 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.699830 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.199804022 +0000 UTC m=+150.186115037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.700345 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.700657 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.200643303 +0000 UTC m=+150.186954318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.798950 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lpscr"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.800162 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.800848 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.801016 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.300990446 +0000 UTC m=+150.287301461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.801188 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.801457 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.301446377 +0000 UTC m=+150.287757392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.807482 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpscr"] Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.836084 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.902566 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.902788 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.402732714 +0000 UTC m=+150.389043729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.902861 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.902953 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cjs\" (UniqueName: \"kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.903142 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.903328 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-utilities\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:42 crc kubenswrapper[4747]: E1126 13:17:42.903494 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:17:43.403472322 +0000 UTC m=+150.389783337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sddmq" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.955243 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a4d76ab485479a11da5db43624f75a91b525d9932986ee891a4ed2f442d157fd"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.955302 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"af1a6000c1b2784e61a080d015ff07ff0482c6eb3251125a26be3165f5e23013"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.959573 4747 generic.go:334] "Generic (PLEG): container finished" podID="af471b2c-feb1-40af-bb70-4b41459277c3" containerID="0c78a08b96e38961f36b728671a374d5a9d6829552f2822549086c62666e5a5c" exitCode=0 Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.959653 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerDied","Data":"0c78a08b96e38961f36b728671a374d5a9d6829552f2822549086c62666e5a5c"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.959679 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerStarted","Data":"50fafa20d936823292db1af99b1368e00f0beaf52419dc270d0865aced316598"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.961328 4747 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-26T13:17:42.47857732Z","Handler":null,"Name":""} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.963156 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"008a88398c498ddbf1957d9535b55aaa458dd2db5e26b350a183a3299a3e2e01"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.963177 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"172382f0fa971c0e94103aa7d88f8b5f70429ad33c46cf3fa2f7ff027325f207"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.963328 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.965880 4747 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.965913 4747 csi_plugin.go:113] 
kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.967097 4747 generic.go:334] "Generic (PLEG): container finished" podID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerID="47a0a85a913cd92d7eccdfc78f48ef0100cfb44bc4faa41c4de772bcf8a2df82" exitCode=0 Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.967124 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerDied","Data":"47a0a85a913cd92d7eccdfc78f48ef0100cfb44bc4faa41c4de772bcf8a2df82"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.967176 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerStarted","Data":"4fc7eaf98e5040a827cb142c93457cdffde8143f4583193eb17346489184c5d3"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.984264 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" event={"ID":"1fe80ce7-7278-4c19-b658-1ec3336280e8","Type":"ContainerStarted","Data":"c1ee8b174972f5d7a581713074f98fb56ac89dd7cc511b8bf1aea88cc6c1d413"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.985906 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"633162d1-2efc-4d2c-9310-ff04a8d35a04","Type":"ContainerStarted","Data":"b7555b51a23fddef79f3fba07dcfdfb02cf31058f091e3196ff72d390f9eeb19"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.985950 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"633162d1-2efc-4d2c-9310-ff04a8d35a04","Type":"ContainerStarted","Data":"9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.995806 4747 generic.go:334] "Generic (PLEG): container finished" podID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerID="ca3f4d8e996a9eedfa8aaa3bec406b732bf8720fea549e4e373b9ceec25aae78" exitCode=0 Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.996136 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerDied","Data":"ca3f4d8e996a9eedfa8aaa3bec406b732bf8720fea549e4e373b9ceec25aae78"} Nov 26 13:17:42 crc kubenswrapper[4747]: I1126 13:17:42.996463 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerStarted","Data":"5693c03d32fb3685c3c6393542ff62ec6b8be17f625b6eae09ed11b82d206840"} Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:42.998604 4747 generic.go:334] "Generic (PLEG): container finished" podID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerID="fee48446d3e4c11ad79df68dbf940fa1ecfceff3733e9caebb87b1a4a2869309" exitCode=0 Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:42.998650 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerDied","Data":"fee48446d3e4c11ad79df68dbf940fa1ecfceff3733e9caebb87b1a4a2869309"} Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 
13:17:43.003030 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e059a6b86d182e42a6692b4fc69405cd5d420e720e90e50dfc389c93350560f0"} Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.003113 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"21fcccb760c9e89baf784b8898d35924e353428089e37c767d8ba54481cb1af7"} Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.003389 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.004742 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.004936 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-utilities\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.004998 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cjs\" (UniqueName: \"kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.005090 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.007351 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.007524 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-utilities\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.038128 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.043990 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cjs\" (UniqueName: \"kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs\") pod \"redhat-marketplace-lpscr\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.071123 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"] Nov 26 13:17:43 crc kubenswrapper[4747]: W1126 13:17:43.078671 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0101a0_a045_41c3_8387_7a84e8236d65.slice/crio-badc636c97c9afc0e0f21ca7a3eb68d1f305c586eb1b89770678381c2175d13a WatchSource:0}: Error finding container badc636c97c9afc0e0f21ca7a3eb68d1f305c586eb1b89770678381c2175d13a: Status 404 returned error can't find the container with id badc636c97c9afc0e0f21ca7a3eb68d1f305c586eb1b89770678381c2175d13a Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.080436 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.08042221 podStartE2EDuration="2.08042221s" podCreationTimestamp="2025-11-26 13:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:43.077404444 +0000 UTC m=+150.063715459" watchObservedRunningTime="2025-11-26 13:17:43.08042221 +0000 UTC m=+150.066733225" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.110248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.115555 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.118945 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4ll78" podStartSLOduration=11.118929838 podStartE2EDuration="11.118929838s" podCreationTimestamp="2025-11-26 13:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:43.118019135 +0000 UTC m=+150.104330170" watchObservedRunningTime="2025-11-26 13:17:43.118929838 +0000 UTC m=+150.105240853" Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.233070 4747 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.233573 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.301906 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sddmq\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " pod="openshift-image-registry/image-registry-697d97f7c8-sddmq"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.336711 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpscr"]
Nov 26 13:17:43 crc kubenswrapper[4747]: W1126 13:17:43.346116 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e66af2_1f24_47a6_9315_4ac97f474115.slice/crio-6e2ec24c3f4ad710f8ec410f5cb7ab3a110d0cd1ac6c461a01e671aed8722972 WatchSource:0}: Error finding container 6e2ec24c3f4ad710f8ec410f5cb7ab3a110d0cd1ac6c461a01e671aed8722972: Status 404 returned error can't find the container with id 6e2ec24c3f4ad710f8ec410f5cb7ab3a110d0cd1ac6c461a01e671aed8722972
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.450646 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.510111 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:17:43 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld
Nov 26 13:17:43 crc kubenswrapper[4747]: [+]process-running ok
Nov 26 13:17:43 crc kubenswrapper[4747]: healthz check failed
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.510161 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.602025 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"]
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.603149 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.605206 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.620425 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"]
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.719457 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.719554 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.719614 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwn6\" (UniqueName: \"kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.812229 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.820761 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.820845 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwn6\" (UniqueName: \"kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.820894 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.822702 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.823286 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.855520 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwn6\" (UniqueName: \"kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6\") pod \"redhat-operators-hf2r6\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:43 crc kubenswrapper[4747]: I1126 13:17:43.925713 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"]
Nov 26 13:17:43 crc kubenswrapper[4747]: W1126 13:17:43.981018 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea6220c2_d975_49c7_86c4_d71c809cc426.slice/crio-444837625b3ad46c758bfbcd203c993b1c66811b637c608b9ba915e4fc618790 WatchSource:0}: Error finding container 444837625b3ad46c758bfbcd203c993b1c66811b637c608b9ba915e4fc618790: Status 404 returned error can't find the container with id 444837625b3ad46c758bfbcd203c993b1c66811b637c608b9ba915e4fc618790
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:43.998805 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hf2r6"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.005062 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"]
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.006699 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.013473 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"]
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.027506 4747 generic.go:334] "Generic (PLEG): container finished" podID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerID="8b55db0ed6d48097237fceefa1611562bb239f82b8da04efe6618ee9b25bf592" exitCode=0
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.027592 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerDied","Data":"8b55db0ed6d48097237fceefa1611562bb239f82b8da04efe6618ee9b25bf592"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.027658 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerStarted","Data":"6e2ec24c3f4ad710f8ec410f5cb7ab3a110d0cd1ac6c461a01e671aed8722972"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.036024 4747 generic.go:334] "Generic (PLEG): container finished" podID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerID="fe8a3b65da9659bb86de22759f21cfde0d93bfad3e6cc372dafe5936a87b57aa" exitCode=0
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.036189 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerDied","Data":"fe8a3b65da9659bb86de22759f21cfde0d93bfad3e6cc372dafe5936a87b57aa"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.036237 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerStarted","Data":"badc636c97c9afc0e0f21ca7a3eb68d1f305c586eb1b89770678381c2175d13a"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.044847 4747 generic.go:334] "Generic (PLEG): container finished" podID="aeb24804-3be2-46c5-b1a6-494b7b271aee" containerID="a14f4c82f14ff7623f0976f6c254fc187b344e4a8c0a6f9b6b404c22944aba9e" exitCode=0
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.044925 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" event={"ID":"aeb24804-3be2-46c5-b1a6-494b7b271aee","Type":"ContainerDied","Data":"a14f4c82f14ff7623f0976f6c254fc187b344e4a8c0a6f9b6b404c22944aba9e"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.049082 4747 generic.go:334] "Generic (PLEG): container finished" podID="633162d1-2efc-4d2c-9310-ff04a8d35a04" containerID="b7555b51a23fddef79f3fba07dcfdfb02cf31058f091e3196ff72d390f9eeb19" exitCode=0
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.049143 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"633162d1-2efc-4d2c-9310-ff04a8d35a04","Type":"ContainerDied","Data":"b7555b51a23fddef79f3fba07dcfdfb02cf31058f091e3196ff72d390f9eeb19"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.068192 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" event={"ID":"ea6220c2-d975-49c7-86c4-d71c809cc426","Type":"ContainerStarted","Data":"444837625b3ad46c758bfbcd203c993b1c66811b637c608b9ba915e4fc618790"}
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.125411 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.125928 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxb6j\" (UniqueName: \"kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.125966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.228504 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.228554 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxb6j\" (UniqueName: \"kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.228582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.232582 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.232722 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.260273 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxb6j\" (UniqueName: \"kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j\") pod \"redhat-operators-cwrtn\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.338608 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwrtn"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.399343 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"]
Nov 26 13:17:44 crc kubenswrapper[4747]: W1126 13:17:44.422571 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56046c3_771a_4f54_afba_59f160f1e415.slice/crio-e10d8680edd28da58948217b66d45997956c5a4b29753b8ed98f7da8ba904766 WatchSource:0}: Error finding container e10d8680edd28da58948217b66d45997956c5a4b29753b8ed98f7da8ba904766: Status 404 returned error can't find the container with id e10d8680edd28da58948217b66d45997956c5a4b29753b8ed98f7da8ba904766
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.504829 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.510208 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x8z2x"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.511140 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:17:44 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld
Nov 26 13:17:44 crc kubenswrapper[4747]: [+]process-running ok
Nov 26 13:17:44 crc kubenswrapper[4747]: healthz check failed
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.511203 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.660178 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tv788"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.660709 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tv788"
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.673334 4747 patch_prober.go:28] interesting pod/console-f9d7485db-tv788 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 26 13:17:44 crc kubenswrapper[4747]: I1126 13:17:44.673394 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tv788" podUID="c5734b55-b478-4525-b5da-88b63b4812d0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.037744 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"]
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.115928 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerStarted","Data":"0834d714928b9ca2839042ec321b865594b89ceb977f74fa06a59c0b1a73ea24"}
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.130180 4747 generic.go:334] "Generic (PLEG): container finished" podID="e56046c3-771a-4f54-afba-59f160f1e415" containerID="fc05f0e67799a6300012269ff17c6af42dbdf62a28d6f7d2ba30f46421e253ff" exitCode=0
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.130240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerDied","Data":"fc05f0e67799a6300012269ff17c6af42dbdf62a28d6f7d2ba30f46421e253ff"}
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.130305 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerStarted","Data":"e10d8680edd28da58948217b66d45997956c5a4b29753b8ed98f7da8ba904766"}
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.177527 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" event={"ID":"ea6220c2-d975-49c7-86c4-d71c809cc426","Type":"ContainerStarted","Data":"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f"}
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.177750 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.207340 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" podStartSLOduration=131.207322565 podStartE2EDuration="2m11.207322565s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:45.206706539 +0000 UTC m=+152.193017554" watchObservedRunningTime="2025-11-26 13:17:45.207322565 +0000 UTC m=+152.193633590"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.373980 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-mr6p4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.374022 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mr6p4" podUID="1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.374407 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-mr6p4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.374425 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mr6p4" podUID="1cbbcbb0-8ef9-4ef3-86e6-9db673fa4cba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.511977 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:17:45 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld
Nov 26 13:17:45 crc kubenswrapper[4747]: [+]process-running ok
Nov 26 13:17:45 crc kubenswrapper[4747]: healthz check failed
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.514223 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.515700 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ctbq5"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.580843 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.665567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir\") pod \"633162d1-2efc-4d2c-9310-ff04a8d35a04\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") "
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.665717 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access\") pod \"633162d1-2efc-4d2c-9310-ff04a8d35a04\" (UID: \"633162d1-2efc-4d2c-9310-ff04a8d35a04\") "
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.673207 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "633162d1-2efc-4d2c-9310-ff04a8d35a04" (UID: "633162d1-2efc-4d2c-9310-ff04a8d35a04"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.680324 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "633162d1-2efc-4d2c-9310-ff04a8d35a04" (UID: "633162d1-2efc-4d2c-9310-ff04a8d35a04"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.751350 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.769796 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/633162d1-2efc-4d2c-9310-ff04a8d35a04-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.769827 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/633162d1-2efc-4d2c-9310-ff04a8d35a04-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.827450 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qrchs"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.838620 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf"
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.874392 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume\") pod \"aeb24804-3be2-46c5-b1a6-494b7b271aee\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") "
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.874480 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcsqx\" (UniqueName: \"kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx\") pod \"aeb24804-3be2-46c5-b1a6-494b7b271aee\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") "
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.874505 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume\") pod \"aeb24804-3be2-46c5-b1a6-494b7b271aee\" (UID: \"aeb24804-3be2-46c5-b1a6-494b7b271aee\") "
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.876083 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume" (OuterVolumeSpecName: "config-volume") pod "aeb24804-3be2-46c5-b1a6-494b7b271aee" (UID: "aeb24804-3be2-46c5-b1a6-494b7b271aee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.881312 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aeb24804-3be2-46c5-b1a6-494b7b271aee" (UID: "aeb24804-3be2-46c5-b1a6-494b7b271aee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.882659 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx" (OuterVolumeSpecName: "kube-api-access-tcsqx") pod "aeb24804-3be2-46c5-b1a6-494b7b271aee" (UID: "aeb24804-3be2-46c5-b1a6-494b7b271aee"). InnerVolumeSpecName "kube-api-access-tcsqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.977009 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb24804-3be2-46c5-b1a6-494b7b271aee-config-volume\") on node \"crc\" DevicePath \"\""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.977044 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcsqx\" (UniqueName: \"kubernetes.io/projected/aeb24804-3be2-46c5-b1a6-494b7b271aee-kube-api-access-tcsqx\") on node \"crc\" DevicePath \"\""
Nov 26 13:17:45 crc kubenswrapper[4747]: I1126 13:17:45.977069 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aeb24804-3be2-46c5-b1a6-494b7b271aee-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.256745 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk" event={"ID":"aeb24804-3be2-46c5-b1a6-494b7b271aee","Type":"ContainerDied","Data":"5e029d72e3bae615a62be6123e483d63204d7efde42a54b39a134b362409bff6"}
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.257247 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e029d72e3bae615a62be6123e483d63204d7efde42a54b39a134b362409bff6"
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.257321 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-vsvbk"
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.270190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"633162d1-2efc-4d2c-9310-ff04a8d35a04","Type":"ContainerDied","Data":"9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad"}
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.270231 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9898d550fab3a7d7125693d6483bd9eafb1e650b31ef1d7cbb4e32d5799166ad"
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.270289 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.274993 4747 generic.go:334] "Generic (PLEG): container finished" podID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerID="c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02" exitCode=0
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.276017 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerDied","Data":"c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02"}
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.510819 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:17:46 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld
Nov 26 13:17:46 crc kubenswrapper[4747]: [+]process-running ok
Nov 26 13:17:46 crc kubenswrapper[4747]: healthz check failed
Nov 26 13:17:46 crc kubenswrapper[4747]: I1126 13:17:46.510923 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.511095 4747 patch_prober.go:28] interesting pod/router-default-5444994796-ctbq5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:17:47 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld
Nov 26 13:17:47 crc kubenswrapper[4747]: [+]process-running ok
Nov 26 13:17:47 crc kubenswrapper[4747]: healthz check failed
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.511147 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ctbq5" podUID="4395cb3b-b843-4c5b-8312-adcd0887d777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.557923 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 13:17:47 crc kubenswrapper[4747]: E1126 13:17:47.558139 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633162d1-2efc-4d2c-9310-ff04a8d35a04" containerName="pruner"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.558152 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="633162d1-2efc-4d2c-9310-ff04a8d35a04" containerName="pruner"
Nov 26 13:17:47 crc kubenswrapper[4747]: E1126 13:17:47.558166 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb24804-3be2-46c5-b1a6-494b7b271aee" containerName="collect-profiles"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.558174 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb24804-3be2-46c5-b1a6-494b7b271aee" containerName="collect-profiles"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.558295 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="633162d1-2efc-4d2c-9310-ff04a8d35a04" containerName="pruner"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.558312 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb24804-3be2-46c5-b1a6-494b7b271aee" containerName="collect-profiles"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.558647 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.560927 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.561182 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.575199 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.706418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.706488 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.808647 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.808702 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.808775 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.844365 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:47 crc kubenswrapper[4747]: I1126 13:17:47.891310 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:17:48 crc kubenswrapper[4747]: I1126 13:17:48.427821 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 13:17:48 crc kubenswrapper[4747]: I1126 13:17:48.511936 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ctbq5"
Nov 26 13:17:48 crc kubenswrapper[4747]: I1126 13:17:48.518595 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ctbq5"
Nov 26 13:17:49 crc kubenswrapper[4747]: I1126 13:17:49.313341 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9db4e936-6acc-46a1-ad3f-4b771000d54b","Type":"ContainerStarted","Data":"1c364405c88355f58b7e1618b7b880872774aa3e795730ddb09cd114ed5faf0c"}
Nov 26 13:17:50 crc kubenswrapper[4747]: I1126 13:17:50.319944 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nfwnt"
Nov 26 13:17:50 crc kubenswrapper[4747]: I1126 13:17:50.326240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9db4e936-6acc-46a1-ad3f-4b771000d54b","Type":"ContainerStarted","Data":"08df269366136c96465a4597493e732a415e975919fee0665f545ec789a88342"}
Nov 26 13:17:50 crc kubenswrapper[4747]: I1126 13:17:50.405239 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.4052200790000002 podStartE2EDuration="3.405220079s" podCreationTimestamp="2025-11-26 13:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:17:50.388847847 +0000 UTC m=+157.375158862" watchObservedRunningTime="2025-11-26 13:17:50.405220079 +0000 UTC m=+157.391531094"
Nov 26 13:17:51 crc kubenswrapper[4747]: I1126 13:17:51.346279 4747 generic.go:334] "Generic (PLEG): container finished" podID="9db4e936-6acc-46a1-ad3f-4b771000d54b" containerID="08df269366136c96465a4597493e732a415e975919fee0665f545ec789a88342" exitCode=0
Nov 26 13:17:51 crc kubenswrapper[4747]: I1126 13:17:51.346335 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9db4e936-6acc-46a1-ad3f-4b771000d54b","Type":"ContainerDied","Data":"08df269366136c96465a4597493e732a415e975919fee0665f545ec789a88342"}
Nov 26 13:17:54 crc kubenswrapper[4747]: I1126 13:17:54.662743 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tv788"
Nov 26 13:17:54 crc kubenswrapper[4747]: I1126 13:17:54.672504 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tv788"
Nov 26 13:17:55 crc kubenswrapper[4747]: I1126 13:17:55.379975 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mr6p4"
Nov 26 13:17:57 crc kubenswrapper[4747]: I1126 13:17:57.480329 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:17:57 crc kubenswrapper[4747]: I1126 13:17:57.492338 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67391449-89bb-423a-b690-2f60a43ccfad-metrics-certs\") pod \"network-metrics-daemon-6zzh7\" (UID: \"67391449-89bb-423a-b690-2f60a43ccfad\") " pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:17:57 crc kubenswrapper[4747]: I1126 13:17:57.528824 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6zzh7"
Nov 26 13:18:03 crc kubenswrapper[4747]: I1126 13:18:03.417301 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:18:03 crc kubenswrapper[4747]: I1126 13:18:03.417605 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:18:03 crc kubenswrapper[4747]: I1126 13:18:03.456623 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq"
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.330852 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.430196 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9db4e936-6acc-46a1-ad3f-4b771000d54b","Type":"ContainerDied","Data":"1c364405c88355f58b7e1618b7b880872774aa3e795730ddb09cd114ed5faf0c"}
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.430239 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c364405c88355f58b7e1618b7b880872774aa3e795730ddb09cd114ed5faf0c"
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.430236 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.493684 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir\") pod \"9db4e936-6acc-46a1-ad3f-4b771000d54b\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") "
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.493756 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access\") pod \"9db4e936-6acc-46a1-ad3f-4b771000d54b\" (UID: \"9db4e936-6acc-46a1-ad3f-4b771000d54b\") "
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.493851 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9db4e936-6acc-46a1-ad3f-4b771000d54b" (UID: "9db4e936-6acc-46a1-ad3f-4b771000d54b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.494032 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9db4e936-6acc-46a1-ad3f-4b771000d54b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.498589 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9db4e936-6acc-46a1-ad3f-4b771000d54b" (UID: "9db4e936-6acc-46a1-ad3f-4b771000d54b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:18:04 crc kubenswrapper[4747]: I1126 13:18:04.595104 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db4e936-6acc-46a1-ad3f-4b771000d54b-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 26 13:18:15 crc kubenswrapper[4747]: I1126 13:18:15.488077 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpslg"
Nov 26 13:18:22 crc kubenswrapper[4747]: I1126 13:18:22.108022 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:18:22 crc kubenswrapper[4747]: E1126 13:18:22.543697 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 26 13:18:22 crc kubenswrapper[4747]: E1126 13:18:22.543854 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s54tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6s4vv_openshift-marketplace(43888409-bce4-40b7-bd9c-4c505b3929b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 26 13:18:22 crc kubenswrapper[4747]: E1126 13:18:22.545038 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6s4vv" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0"
Nov 26 13:18:23 crc kubenswrapper[4747]: E1126 13:18:23.533770 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 26 13:18:23 crc kubenswrapper[4747]: E1126 13:18:23.534348 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjcp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-94gg7_openshift-marketplace(a86b0da3-17e9-4b7f-a54b-3b54c8f7a906): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 26 13:18:23 crc kubenswrapper[4747]: E1126 13:18:23.535601 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-94gg7" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906"
Nov 26 13:18:27 crc kubenswrapper[4747]: E1126 13:18:27.433226 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-94gg7" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" Nov 26 13:18:27 crc kubenswrapper[4747]: E1126 13:18:27.433495 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6s4vv" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" Nov 26 13:18:27 crc kubenswrapper[4747]: E1126 13:18:27.587211 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 26 13:18:27 crc kubenswrapper[4747]: E1126 13:18:27.587396 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hf2r6_openshift-marketplace(e56046c3-771a-4f54-afba-59f160f1e415): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:18:27 crc kubenswrapper[4747]: E1126 13:18:27.588632 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hf2r6" podUID="e56046c3-771a-4f54-afba-59f160f1e415" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.557179 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.558325 4747 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lpscr_openshift-marketplace(a8e66af2-1f24-47a6-9315-4ac97f474115): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.559641 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lpscr" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.578245 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lpscr" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.579530 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hf2r6" podUID="e56046c3-771a-4f54-afba-59f160f1e415" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.622466 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.622699 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5vhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4f7qf_openshift-marketplace(af471b2c-feb1-40af-bb70-4b41459277c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.623875 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4f7qf" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" Nov 26 13:18:28 crc kubenswrapper[4747]: I1126 13:18:28.647036 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6zzh7"] Nov 26 13:18:28 crc kubenswrapper[4747]: W1126 13:18:28.657243 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67391449_89bb_423a_b690_2f60a43ccfad.slice/crio-d25f17d607ed93962ced58a721e8250b94b576612473420a3b195834e90d6196 WatchSource:0}: Error finding container d25f17d607ed93962ced58a721e8250b94b576612473420a3b195834e90d6196: Status 404 returned error can't find the container with id d25f17d607ed93962ced58a721e8250b94b576612473420a3b195834e90d6196 Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.685923 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.686771 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lftg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rqbp5_openshift-marketplace(4d0101a0-a045-41c3-8387-7a84e8236d65): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.687979 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rqbp5" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.986482 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.986700 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxb6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cwrtn_openshift-marketplace(12cb1fe3-c93c-4a2b-b13e-c660d9b34012): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:18:28 crc kubenswrapper[4747]: E1126 13:18:28.987962 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cwrtn" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.584299 4747 generic.go:334] "Generic (PLEG): container finished" podID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerID="669d678b68c0323290b3316dcf2b6a97c08ed5ee21f7f65c06f5b17108ed6861" exitCode=0 Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.584438 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerDied","Data":"669d678b68c0323290b3316dcf2b6a97c08ed5ee21f7f65c06f5b17108ed6861"} Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.589772 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" event={"ID":"67391449-89bb-423a-b690-2f60a43ccfad","Type":"ContainerStarted","Data":"da1c17b0a3d72f9dc54eaf4f4c576a0f7856560af5634444572e826015559941"} Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.589831 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" event={"ID":"67391449-89bb-423a-b690-2f60a43ccfad","Type":"ContainerStarted","Data":"95d62feefaa7b0da68eefff2c1a817e0bd070c51cac1096e169d86c6eedfd851"} Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.589846 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6zzh7" event={"ID":"67391449-89bb-423a-b690-2f60a43ccfad","Type":"ContainerStarted","Data":"d25f17d607ed93962ced58a721e8250b94b576612473420a3b195834e90d6196"} Nov 26 13:18:29 crc kubenswrapper[4747]: 
E1126 13:18:29.591669 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4f7qf" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" Nov 26 13:18:29 crc kubenswrapper[4747]: E1126 13:18:29.594889 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cwrtn" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" Nov 26 13:18:29 crc kubenswrapper[4747]: E1126 13:18:29.602634 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rqbp5" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" Nov 26 13:18:29 crc kubenswrapper[4747]: I1126 13:18:29.630981 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6zzh7" podStartSLOduration=175.630958693 podStartE2EDuration="2m55.630958693s" podCreationTimestamp="2025-11-26 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:18:29.62417605 +0000 UTC m=+196.610487055" watchObservedRunningTime="2025-11-26 13:18:29.630958693 +0000 UTC m=+196.617269708" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.519634 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:18:30 crc kubenswrapper[4747]: E1126 13:18:30.520397 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db4e936-6acc-46a1-ad3f-4b771000d54b" containerName="pruner" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.520415 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db4e936-6acc-46a1-ad3f-4b771000d54b" containerName="pruner" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.520555 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db4e936-6acc-46a1-ad3f-4b771000d54b" containerName="pruner" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.520975 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.532446 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.533034 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.537786 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.707799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.707892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.809570 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.809661 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.809750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.840043 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:30 crc kubenswrapper[4747]: I1126 13:18:30.880282 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.137460 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.601434 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"61bbf561-3580-4ddc-a0aa-d4f9b221f64d","Type":"ContainerStarted","Data":"c4972310bfc75c5b796d7e052cb1380f13229fbb668349876d8047394c614f08"} Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.601811 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"61bbf561-3580-4ddc-a0aa-d4f9b221f64d","Type":"ContainerStarted","Data":"58268fd931b82597fe8e3d9e46deb22f01d1c3a8690fdcf362fdb5d65c7234b9"} Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.604418 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerStarted","Data":"02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e"} Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.617827 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.6178045669999999 podStartE2EDuration="1.617804567s" podCreationTimestamp="2025-11-26 13:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:18:31.616773921 +0000 UTC m=+198.603084936" watchObservedRunningTime="2025-11-26 13:18:31.617804567 +0000 UTC m=+198.604115602" Nov 26 13:18:31 crc kubenswrapper[4747]: I1126 13:18:31.638467 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5pxjc" podStartSLOduration=4.190342976 podStartE2EDuration="51.638442344s" podCreationTimestamp="2025-11-26 13:17:40 +0000 UTC" firstStartedPulling="2025-11-26 13:17:42.997996598 +0000 UTC m=+149.984307623" lastFinishedPulling="2025-11-26 13:18:30.446095976 +0000 UTC m=+197.432406991" observedRunningTime="2025-11-26 13:18:31.63552614 +0000 UTC m=+198.621837155" watchObservedRunningTime="2025-11-26 13:18:31.638442344 +0000 UTC m=+198.624753359" Nov 26 13:18:32 crc kubenswrapper[4747]: I1126 13:18:32.614118 4747 generic.go:334] "Generic (PLEG): container finished" podID="61bbf561-3580-4ddc-a0aa-d4f9b221f64d" containerID="c4972310bfc75c5b796d7e052cb1380f13229fbb668349876d8047394c614f08" exitCode=0 Nov 26 13:18:32 crc kubenswrapper[4747]: I1126 13:18:32.614201 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"61bbf561-3580-4ddc-a0aa-d4f9b221f64d","Type":"ContainerDied","Data":"c4972310bfc75c5b796d7e052cb1380f13229fbb668349876d8047394c614f08"} Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.417553 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.417992 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" 
podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.847692 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.954963 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access\") pod \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.955031 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir\") pod \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\" (UID: \"61bbf561-3580-4ddc-a0aa-d4f9b221f64d\") " Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.955099 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "61bbf561-3580-4ddc-a0aa-d4f9b221f64d" (UID: "61bbf561-3580-4ddc-a0aa-d4f9b221f64d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.955293 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:18:33 crc kubenswrapper[4747]: I1126 13:18:33.964207 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "61bbf561-3580-4ddc-a0aa-d4f9b221f64d" (UID: "61bbf561-3580-4ddc-a0aa-d4f9b221f64d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:18:34 crc kubenswrapper[4747]: I1126 13:18:34.056568 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61bbf561-3580-4ddc-a0aa-d4f9b221f64d-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:18:34 crc kubenswrapper[4747]: I1126 13:18:34.626102 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"61bbf561-3580-4ddc-a0aa-d4f9b221f64d","Type":"ContainerDied","Data":"58268fd931b82597fe8e3d9e46deb22f01d1c3a8690fdcf362fdb5d65c7234b9"} Nov 26 13:18:34 crc kubenswrapper[4747]: I1126 13:18:34.626162 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58268fd931b82597fe8e3d9e46deb22f01d1c3a8690fdcf362fdb5d65c7234b9" Nov 26 13:18:34 crc kubenswrapper[4747]: I1126 13:18:34.626186 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.314723 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 13:18:37 crc kubenswrapper[4747]: E1126 13:18:37.315284 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bbf561-3580-4ddc-a0aa-d4f9b221f64d" containerName="pruner" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.315297 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bbf561-3580-4ddc-a0aa-d4f9b221f64d" containerName="pruner" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.315387 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bbf561-3580-4ddc-a0aa-d4f9b221f64d" containerName="pruner" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.315739 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.320192 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.320415 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.326436 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.502293 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.502378 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.502409 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.603266 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.603324 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.603400 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.603440 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.603496 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.634920 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access\") pod \"installer-9-crc\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:37 crc kubenswrapper[4747]: I1126 13:18:37.648579 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:18:38 crc kubenswrapper[4747]: I1126 13:18:38.076010 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 13:18:38 crc kubenswrapper[4747]: W1126 13:18:38.086702 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf4052850_78ba_4b9b_b0cd_b5608e621a2c.slice/crio-57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4 WatchSource:0}: Error finding container 57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4: Status 404 returned error can't find the container with id 57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4 Nov 26 13:18:38 crc kubenswrapper[4747]: I1126 13:18:38.645956 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f4052850-78ba-4b9b-b0cd-b5608e621a2c","Type":"ContainerStarted","Data":"57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4"} Nov 26 13:18:39 crc kubenswrapper[4747]: I1126 13:18:39.651230 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f4052850-78ba-4b9b-b0cd-b5608e621a2c","Type":"ContainerStarted","Data":"220dfa7196fcec3a4e097c4cf7ce8f68d6e6547be196b63a1df44f4839978cc2"} Nov 26 13:18:39 crc kubenswrapper[4747]: I1126 13:18:39.671521 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.6715001579999997 podStartE2EDuration="2.671500158s" podCreationTimestamp="2025-11-26 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:18:39.668177813 +0000 UTC m=+206.654488838" watchObservedRunningTime="2025-11-26 13:18:39.671500158 +0000 UTC m=+206.657811173" Nov 26 13:18:41 crc kubenswrapper[4747]: I1126 13:18:41.450227 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:41 crc kubenswrapper[4747]: I1126 13:18:41.450780 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:42 crc kubenswrapper[4747]: I1126 13:18:42.455789 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:42 crc kubenswrapper[4747]: I1126 13:18:42.519439 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:44 crc kubenswrapper[4747]: I1126 13:18:44.688170 4747 generic.go:334] "Generic (PLEG): container finished" podID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerID="933656c5936d287eb1bb94f09d1293a2809d51957ab61394ec885e354138e455" exitCode=0 Nov 26 13:18:44 crc kubenswrapper[4747]: I1126 13:18:44.688489 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerDied","Data":"933656c5936d287eb1bb94f09d1293a2809d51957ab61394ec885e354138e455"} Nov 26 13:18:46 crc kubenswrapper[4747]: I1126 13:18:46.036821 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:18:46 crc kubenswrapper[4747]: I1126 13:18:46.037546 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5pxjc" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="registry-server" containerID="cri-o://02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" gracePeriod=2 Nov 26 13:18:47 crc kubenswrapper[4747]: I1126 13:18:47.709559 4747 generic.go:334] "Generic (PLEG): container finished" podID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerID="02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" exitCode=0 Nov 26 13:18:47 crc kubenswrapper[4747]: I1126 13:18:47.709591 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerDied","Data":"02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e"} Nov 26 13:18:51 crc kubenswrapper[4747]: E1126 13:18:51.451996 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e is running failed: container process not found" containerID="02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:18:51 crc kubenswrapper[4747]: E1126 13:18:51.453045 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e is running failed: container process not found" containerID="02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:18:51 crc kubenswrapper[4747]: E1126 13:18:51.453623 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e is running failed: container 
process not found" containerID="02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:18:51 crc kubenswrapper[4747]: E1126 13:18:51.453701 4747 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-5pxjc" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="registry-server" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.239596 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.336078 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities\") pod \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.336158 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content\") pod \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.336227 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9vk\" (UniqueName: \"kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk\") pod \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\" (UID: \"7b88059c-90c0-4fb3-9981-7324f7f2ce1f\") " Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.337648 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities" (OuterVolumeSpecName: "utilities") pod "7b88059c-90c0-4fb3-9981-7324f7f2ce1f" (UID: "7b88059c-90c0-4fb3-9981-7324f7f2ce1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.343254 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk" (OuterVolumeSpecName: "kube-api-access-6r9vk") pod "7b88059c-90c0-4fb3-9981-7324f7f2ce1f" (UID: "7b88059c-90c0-4fb3-9981-7324f7f2ce1f"). InnerVolumeSpecName "kube-api-access-6r9vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.389421 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b88059c-90c0-4fb3-9981-7324f7f2ce1f" (UID: "7b88059c-90c0-4fb3-9981-7324f7f2ce1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.437897 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.437949 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.437971 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9vk\" (UniqueName: \"kubernetes.io/projected/7b88059c-90c0-4fb3-9981-7324f7f2ce1f-kube-api-access-6r9vk\") on node \"crc\" DevicePath \"\"" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.761276 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pxjc" event={"ID":"7b88059c-90c0-4fb3-9981-7324f7f2ce1f","Type":"ContainerDied","Data":"5693c03d32fb3685c3c6393542ff62ec6b8be17f625b6eae09ed11b82d206840"} Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.761355 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pxjc" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.761377 4747 scope.go:117] "RemoveContainer" containerID="02f9e2ecff9aa3e7d3c19e92bba1499fa4362a1bb1a20c4d136fa4af74e4d50e" Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.813531 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:18:55 crc kubenswrapper[4747]: I1126 13:18:55.813596 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5pxjc"] Nov 26 13:18:55 crc kubenswrapper[4747]: E1126 13:18:55.868692 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b88059c_90c0_4fb3_9981_7324f7f2ce1f.slice\": RecentStats: unable to find data in memory cache]" Nov 26 13:18:57 crc kubenswrapper[4747]: I1126 13:18:57.809519 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" path="/var/lib/kubelet/pods/7b88059c-90c0-4fb3-9981-7324f7f2ce1f/volumes" Nov 26 13:19:00 crc kubenswrapper[4747]: I1126 13:19:00.771110 4747 scope.go:117] "RemoveContainer" containerID="669d678b68c0323290b3316dcf2b6a97c08ed5ee21f7f65c06f5b17108ed6861" Nov 26 13:19:01 crc kubenswrapper[4747]: I1126 13:19:01.205746 4747 scope.go:117] "RemoveContainer" containerID="ca3f4d8e996a9eedfa8aaa3bec406b732bf8720fea549e4e373b9ceec25aae78" Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.827027 4747 generic.go:334] "Generic (PLEG): container finished" podID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerID="a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.827116 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerDied","Data":"a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.830276 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerID="331688a7a890a58ce52abde05df70dd3b0e1f200c8ca94abe3ea46518f67be3d" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.830351 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerDied","Data":"331688a7a890a58ce52abde05df70dd3b0e1f200c8ca94abe3ea46518f67be3d"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.834733 4747 generic.go:334] "Generic (PLEG): container finished" podID="e56046c3-771a-4f54-afba-59f160f1e415" containerID="499a6e3b9c91e3f55c4658ac75cb61288245a1e1fcfe9bc99c08d4e4af173d7d" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.834779 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerDied","Data":"499a6e3b9c91e3f55c4658ac75cb61288245a1e1fcfe9bc99c08d4e4af173d7d"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.839547 4747 generic.go:334] "Generic (PLEG): container finished" podID="af471b2c-feb1-40af-bb70-4b41459277c3" containerID="af836cf0a7e51d8f703d7a056334992e8d18af71cbb19563d9311e5822712e39" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.839668 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerDied","Data":"af836cf0a7e51d8f703d7a056334992e8d18af71cbb19563d9311e5822712e39"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.845219 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerStarted","Data":"4861e583f15f0a681eb5b84d073ae394c2648fc575e2b60f085000e7aad749db"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.853569 4747 generic.go:334] "Generic (PLEG): container finished" podID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerID="70d37437019e5634aef69d242bb8526903cef8cbbc280accecd57428d978f128" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.853648 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerDied","Data":"70d37437019e5634aef69d242bb8526903cef8cbbc280accecd57428d978f128"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.865120 4747 generic.go:334] "Generic (PLEG): container finished" podID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerID="3b444f42235ce7c04efdfc4fdc24878de6ac800afcc4260ff11fba42a0768989" exitCode=0 Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.865228 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerDied","Data":"3b444f42235ce7c04efdfc4fdc24878de6ac800afcc4260ff11fba42a0768989"} Nov 26 13:19:02 crc kubenswrapper[4747]: I1126 13:19:02.976706 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6s4vv" podStartSLOduration=4.419965037 podStartE2EDuration="1m22.976687238s" podCreationTimestamp="2025-11-26 13:17:40 +0000 UTC" firstStartedPulling="2025-11-26 13:17:43.007723422 +0000 UTC m=+149.994034437" lastFinishedPulling="2025-11-26 13:19:01.564445583 +0000 UTC m=+228.550756638" 
observedRunningTime="2025-11-26 13:19:02.973823305 +0000 UTC m=+229.960134340" watchObservedRunningTime="2025-11-26 13:19:02.976687238 +0000 UTC m=+229.962998273" Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.418235 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.418317 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.418381 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.419276 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.419438 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023" gracePeriod=600 Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.884839 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023" exitCode=0 Nov 26 13:19:03 crc kubenswrapper[4747]: I1126 13:19:03.884890 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.359406 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ksg5q"] Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.891612 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerStarted","Data":"032fca7f473a0a1345c647b2cf31f478a6e15adc4292e7ddea6a727cf0e7784e"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.893761 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerStarted","Data":"0c0ff5a2b0a6151346be6928a092ce684f9b131155dd0b470ba747b8a1341814"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.895359 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" 
event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerStarted","Data":"403921d7cbdc4b0e8973ce49b40a023d0756e83c7cc656121199980438b46dc9"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.897458 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerStarted","Data":"8f3e255b0ef929e08ea9d3c725e32c4a1a784d36d0f6e3460765fbcb13d182f4"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.899440 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.901640 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerStarted","Data":"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4"} Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.920480 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lpscr" podStartSLOduration=3.005884991 podStartE2EDuration="1m22.920461212s" podCreationTimestamp="2025-11-26 13:17:42 +0000 UTC" firstStartedPulling="2025-11-26 13:17:44.036556679 +0000 UTC m=+151.022867694" lastFinishedPulling="2025-11-26 13:19:03.95113289 +0000 UTC m=+230.937443915" observedRunningTime="2025-11-26 13:19:04.917415534 +0000 UTC m=+231.903726559" watchObservedRunningTime="2025-11-26 13:19:04.920461212 +0000 UTC m=+231.906772227" Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.951840 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4f7qf" podStartSLOduration=3.998253574 podStartE2EDuration="1m24.951825633s" podCreationTimestamp="2025-11-26 13:17:40 +0000 UTC" firstStartedPulling="2025-11-26 13:17:42.962995928 +0000 UTC m=+149.949306943" lastFinishedPulling="2025-11-26 13:19:03.916567977 +0000 UTC m=+230.902879002" observedRunningTime="2025-11-26 13:19:04.949463423 +0000 UTC m=+231.935774438" watchObservedRunningTime="2025-11-26 13:19:04.951825633 +0000 UTC m=+231.938136648" Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.972622 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hf2r6" podStartSLOduration=2.6092146720000002 podStartE2EDuration="1m21.972603924s" podCreationTimestamp="2025-11-26 13:17:43 +0000 UTC" firstStartedPulling="2025-11-26 13:17:45.158228611 +0000 UTC m=+152.144539626" lastFinishedPulling="2025-11-26 13:19:04.521617863 +0000 UTC m=+231.507928878" observedRunningTime="2025-11-26 13:19:04.969633878 +0000 UTC m=+231.955944903" watchObservedRunningTime="2025-11-26 13:19:04.972603924 +0000 UTC m=+231.958914949" Nov 26 13:19:04 crc kubenswrapper[4747]: I1126 13:19:04.989047 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rqbp5" podStartSLOduration=2.828015185 podStartE2EDuration="1m22.989019623s" podCreationTimestamp="2025-11-26 13:17:42 +0000 UTC" firstStartedPulling="2025-11-26 13:17:44.041030402 +0000 UTC m=+151.027341417" lastFinishedPulling="2025-11-26 13:19:04.20203484 +0000 UTC m=+231.188345855" observedRunningTime="2025-11-26 
13:19:04.988364567 +0000 UTC m=+231.974675582" watchObservedRunningTime="2025-11-26 13:19:04.989019623 +0000 UTC m=+231.975330638" Nov 26 13:19:05 crc kubenswrapper[4747]: I1126 13:19:05.007106 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwrtn" podStartSLOduration=4.499193526 podStartE2EDuration="1m22.007088975s" podCreationTimestamp="2025-11-26 13:17:43 +0000 UTC" firstStartedPulling="2025-11-26 13:17:46.286549759 +0000 UTC m=+153.272860774" lastFinishedPulling="2025-11-26 13:19:03.794445218 +0000 UTC m=+230.780756223" observedRunningTime="2025-11-26 13:19:05.006023798 +0000 UTC m=+231.992334813" watchObservedRunningTime="2025-11-26 13:19:05.007088975 +0000 UTC m=+231.993399990" Nov 26 13:19:05 crc kubenswrapper[4747]: I1126 13:19:05.910219 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerStarted","Data":"c0fd4d1d8edda4f0d49cfd4682ad698a3906fbf370bfeea3909ef5f1b50c8738"} Nov 26 13:19:05 crc kubenswrapper[4747]: I1126 13:19:05.937611 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94gg7" podStartSLOduration=3.862327278 podStartE2EDuration="1m25.937591654s" podCreationTimestamp="2025-11-26 13:17:40 +0000 UTC" firstStartedPulling="2025-11-26 13:17:42.969093141 +0000 UTC m=+149.955404156" lastFinishedPulling="2025-11-26 13:19:05.044357517 +0000 UTC m=+232.030668532" observedRunningTime="2025-11-26 13:19:05.936943457 +0000 UTC m=+232.923254472" watchObservedRunningTime="2025-11-26 13:19:05.937591654 +0000 UTC m=+232.923902659" Nov 26 13:19:10 crc kubenswrapper[4747]: I1126 13:19:10.812015 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:19:10 crc kubenswrapper[4747]: I1126 13:19:10.812849 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:19:10 crc kubenswrapper[4747]: I1126 13:19:10.872166 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:19:10 crc kubenswrapper[4747]: I1126 13:19:10.987011 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.271435 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.271731 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.325874 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.440434 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.440522 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:11 crc kubenswrapper[4747]: I1126 13:19:11.498397 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
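Each "Observed pod startup duration" record above carries enough timestamps to decompose a slow start: nearly all of the ~1m23s podStartE2EDuration (observedRunningTime minus podCreationTimestamp) for certified-operators-6s4vv is image pulling (lastFinishedPulling minus firstStartedPulling), and podStartSLOduration is roughly the E2E duration with the pull window excluded. A small sketch that recomputes those windows from the logged values; the layout string is Go's reference-time format for the timestamps as printed here, and only values from the log are used:

```go
// startup_latency.go - recompute the pull window and end-to-end startup time
// for certified-operators-6s4vv from the record above. Wall-clock timestamps
// differ from klog's monotonic m=+ offsets by a few tens of microseconds, so
// the SLO figure matches the logged 4.419965037s only approximately.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-26 13:17:40 +0000 UTC")
	firstPull := mustParse("2025-11-26 13:17:43.007723422 +0000 UTC")
	lastPull := mustParse("2025-11-26 13:19:01.564445583 +0000 UTC")
	running := mustParse("2025-11-26 13:19:02.976687238 +0000 UTC")

	pull := lastPull.Sub(firstPull)
	e2e := running.Sub(created)
	fmt.Println("image pull window:  ", pull)     // ~1m18.6s of the startup
	fmt.Println("podStartE2EDuration:", e2e)      // logged as "1m22.976687238s"
	fmt.Println("SLO (e2e - pull):   ", e2e-pull) // ~4.42s, vs logged 4.419965037
}
```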
status="started" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:12 crc kubenswrapper[4747]: I1126 13:19:12.007110 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:12 crc kubenswrapper[4747]: I1126 13:19:12.016755 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:19:12 crc kubenswrapper[4747]: I1126 13:19:12.836465 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:19:12 crc kubenswrapper[4747]: I1126 13:19:12.836534 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:19:12 crc kubenswrapper[4747]: I1126 13:19:12.906440 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.018709 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.116906 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.119564 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.159391 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.675348 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94gg7"] Nov 26 13:19:13 crc kubenswrapper[4747]: I1126 13:19:13.960986 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94gg7" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="registry-server" containerID="cri-o://c0fd4d1d8edda4f0d49cfd4682ad698a3906fbf370bfeea3909ef5f1b50c8738" gracePeriod=2 Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:13.999941 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.000434 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.032815 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.046619 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.340418 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.341700 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:19:14 crc kubenswrapper[4747]: I1126 13:19:14.405605 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:19:15 crc kubenswrapper[4747]: I1126 13:19:15.034917 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:19:15 crc kubenswrapper[4747]: I1126 13:19:15.036031 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:19:15 crc kubenswrapper[4747]: I1126 13:19:15.977234 4747 generic.go:334] "Generic (PLEG): container finished" podID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerID="c0fd4d1d8edda4f0d49cfd4682ad698a3906fbf370bfeea3909ef5f1b50c8738" exitCode=0 Nov 26 13:19:15 crc kubenswrapper[4747]: I1126 13:19:15.977392 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerDied","Data":"c0fd4d1d8edda4f0d49cfd4682ad698a3906fbf370bfeea3909ef5f1b50c8738"} Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.080202 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpscr"] Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.487606 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.487894 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="registry-server" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.487915 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="registry-server" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.487933 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="extract-utilities" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.487944 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="extract-utilities" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.487956 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="extract-content" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.487967 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="extract-content" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.488135 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b88059c-90c0-4fb3-9981-7324f7f2ce1f" containerName="registry-server" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.488614 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.488956 4747 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.489442 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c" gracePeriod=15 Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.489657 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d" gracePeriod=15 Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.489748 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e" gracePeriod=15 Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.489809 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e" gracePeriod=15 Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.489866 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606" gracePeriod=15 Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.500665 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.500936 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.500950 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.500967 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.500977 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.500993 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501001 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.501017 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501026 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.501037 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501046 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.501079 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501087 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 26 13:19:16 crc kubenswrapper[4747]: E1126 13:19:16.501100 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501108 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501227 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501248 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501261 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501272 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501285 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.501519 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.598797 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599154 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599182 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599206 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599226 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599253 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599289 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.599315 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.670963 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.671743 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.672443 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700416 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700869 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700950 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700973 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700953 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.700994 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:16 crc 
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701019 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701076 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701149 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701182 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701199 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701222 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701292 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.701373 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.802472 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities\") pod \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") "
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.803248 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjcp6\" (UniqueName: \"kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6\") pod \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") "
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.803413 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content\") pod \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\" (UID: \"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906\") "
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.805254 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities" (OuterVolumeSpecName: "utilities") pod "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" (UID: "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.811828 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6" (OuterVolumeSpecName: "kube-api-access-gjcp6") pod "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" (UID: "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906"). InnerVolumeSpecName "kube-api-access-gjcp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.905907 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.905955 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjcp6\" (UniqueName: \"kubernetes.io/projected/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-kube-api-access-gjcp6\") on node \"crc\" DevicePath \"\""
Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.972162 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" (UID: "a86b0da3-17e9-4b7f-a54b-3b54c8f7a906"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Need to start a new one" pod="openshift-marketplace/community-operators-94gg7" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.987611 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94gg7" event={"ID":"a86b0da3-17e9-4b7f-a54b-3b54c8f7a906","Type":"ContainerDied","Data":"4fc7eaf98e5040a827cb142c93457cdffde8143f4583193eb17346489184c5d3"} Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.987803 4747 scope.go:117] "RemoveContainer" containerID="c0fd4d1d8edda4f0d49cfd4682ad698a3906fbf370bfeea3909ef5f1b50c8738" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.988843 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.989689 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.993741 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 13:19:16 crc kubenswrapper[4747]: I1126 13:19:16.999839 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.001243 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606" exitCode=2 Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.001848 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lpscr" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="registry-server" containerID="cri-o://032fca7f473a0a1345c647b2cf31f478a6e15adc4292e7ddea6a727cf0e7784e" gracePeriod=2 Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.002757 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.003030 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.003402 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: E1126 13:19:17.003334 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-lpscr.187b910b5aec5946 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-lpscr,UID:a8e66af2-1f24-47a6-9315-4ac97f474115,APIVersion:v1,ResourceVersion:28458,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:19:17.001787718 +0000 UTC m=+243.988098763,LastTimestamp:2025-11-26 13:19:17.001787718 +0000 UTC m=+243.988098763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.007231 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.008341 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.008858 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.009277 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.014390 4747 scope.go:117] "RemoveContainer" containerID="70d37437019e5634aef69d242bb8526903cef8cbbc280accecd57428d978f128" Nov 26 13:19:17 crc kubenswrapper[4747]: I1126 13:19:17.050724 4747 scope.go:117] "RemoveContainer" containerID="47a0a85a913cd92d7eccdfc78f48ef0100cfb44bc4faa41c4de772bcf8a2df82" Nov 26 13:19:17 crc kubenswrapper[4747]: E1126 13:19:17.876165 4747 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" volumeName="registry-storage" 
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.012904 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" containerID="220dfa7196fcec3a4e097c4cf7ce8f68d6e6547be196b63a1df44f4839978cc2" exitCode=0
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.013027 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f4052850-78ba-4b9b-b0cd-b5608e621a2c","Type":"ContainerDied","Data":"220dfa7196fcec3a4e097c4cf7ce8f68d6e6547be196b63a1df44f4839978cc2"}
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.013946 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.014779 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.015520 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.019412 4747 generic.go:334] "Generic (PLEG): container finished" podID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerID="032fca7f473a0a1345c647b2cf31f478a6e15adc4292e7ddea6a727cf0e7784e" exitCode=0
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.019493 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerDied","Data":"032fca7f473a0a1345c647b2cf31f478a6e15adc4292e7ddea6a727cf0e7784e"}
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.022607 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.024472 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.025702 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d" exitCode=0
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.025740 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e" exitCode=0
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.025753 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e" exitCode=0
Nov 26 13:19:18 crc kubenswrapper[4747]: I1126 13:19:18.025877 4747 scope.go:117] "RemoveContainer" containerID="a987e864e6dfc29422a6d3b1c6390b492c4316279f064b4347e73ab847ee3c58"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.224455 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpscr"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.225437 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.226187 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.226903 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.314743 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.315415 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.316023 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.316711 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused"
Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.341401 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content\") pod \"a8e66af2-1f24-47a6-9315-4ac97f474115\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") "
\"a8e66af2-1f24-47a6-9315-4ac97f474115\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.341558 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2cjs\" (UniqueName: \"kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs\") pod \"a8e66af2-1f24-47a6-9315-4ac97f474115\" (UID: \"a8e66af2-1f24-47a6-9315-4ac97f474115\") " Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.342486 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-utilities" (OuterVolumeSpecName: "utilities") pod "a8e66af2-1f24-47a6-9315-4ac97f474115" (UID: "a8e66af2-1f24-47a6-9315-4ac97f474115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.347425 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs" (OuterVolumeSpecName: "kube-api-access-d2cjs") pod "a8e66af2-1f24-47a6-9315-4ac97f474115" (UID: "a8e66af2-1f24-47a6-9315-4ac97f474115"). InnerVolumeSpecName "kube-api-access-d2cjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.358884 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e66af2-1f24-47a6-9315-4ac97f474115" (UID: "a8e66af2-1f24-47a6-9315-4ac97f474115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.442671 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir\") pod \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.443217 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock\") pod \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.443337 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access\") pod \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\" (UID: \"f4052850-78ba-4b9b-b0cd-b5608e621a2c\") " Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.443634 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4052850-78ba-4b9b-b0cd-b5608e621a2c" (UID: "f4052850-78ba-4b9b-b0cd-b5608e621a2c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.443730 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.443973 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e66af2-1f24-47a6-9315-4ac97f474115-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.444045 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2cjs\" (UniqueName: \"kubernetes.io/projected/a8e66af2-1f24-47a6-9315-4ac97f474115-kube-api-access-d2cjs\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.444160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock" (OuterVolumeSpecName: "var-lock") pod "f4052850-78ba-4b9b-b0cd-b5608e621a2c" (UID: "f4052850-78ba-4b9b-b0cd-b5608e621a2c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.449236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4052850-78ba-4b9b-b0cd-b5608e621a2c" (UID: "f4052850-78ba-4b9b-b0cd-b5608e621a2c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.545276 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.545323 4747 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4052850-78ba-4b9b-b0cd-b5608e621a2c-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:19 crc kubenswrapper[4747]: I1126 13:19:19.545336 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4052850-78ba-4b9b-b0cd-b5608e621a2c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.041983 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpscr" event={"ID":"a8e66af2-1f24-47a6-9315-4ac97f474115","Type":"ContainerDied","Data":"6e2ec24c3f4ad710f8ec410f5cb7ab3a110d0cd1ac6c461a01e671aed8722972"} Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.042062 4747 scope.go:117] "RemoveContainer" containerID="032fca7f473a0a1345c647b2cf31f478a6e15adc4292e7ddea6a727cf0e7784e" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.042232 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpscr" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.042814 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.043122 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.043430 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.046287 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.048160 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f4052850-78ba-4b9b-b0cd-b5608e621a2c","Type":"ContainerDied","Data":"57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4"} Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.048196 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.048196 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57401070c475d6e8512f92e5b488010b9bb36df7a44d0f35a1185e00798fa8f4" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.097889 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.098231 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.098554 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.098906 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.099152 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.099431 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.110424 4747 scope.go:117] "RemoveContainer" containerID="3b444f42235ce7c04efdfc4fdc24878de6ac800afcc4260ff11fba42a0768989" Nov 26 13:19:20 crc kubenswrapper[4747]: I1126 13:19:20.141710 4747 scope.go:117] "RemoveContainer" containerID="8b55db0ed6d48097237fceefa1611562bb239f82b8da04efe6618ee9b25bf592" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.062046 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.063576 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c" exitCode=0 Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 
13:19:21.407575 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.409416 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.410216 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.410714 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.411199 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.411634 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.469758 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.469840 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.469904 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.469948 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.469968 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.470105 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.470509 4747 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.470541 4747 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.470558 4747 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:21 crc kubenswrapper[4747]: E1126 13:19:21.549719 4747 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.550125 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:21 crc kubenswrapper[4747]: W1126 13:19:21.583591 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b0235b004b0331082eabf86efaf023cdf11fc83e6953850a43ed20c4ab427645 WatchSource:0}: Error finding container b0235b004b0331082eabf86efaf023cdf11fc83e6953850a43ed20c4ab427645: Status 404 returned error can't find the container with id b0235b004b0331082eabf86efaf023cdf11fc83e6953850a43ed20c4ab427645 Nov 26 13:19:21 crc kubenswrapper[4747]: I1126 13:19:21.809307 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.072664 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b0235b004b0331082eabf86efaf023cdf11fc83e6953850a43ed20c4ab427645"} Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.078778 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.079903 4747 scope.go:117] "RemoveContainer" containerID="4f7aef04535de341b9b06e343618cbff8ca3d93832df2e3074f36ac39d52280d" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.080128 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.081855 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.082650 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.084120 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.084719 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.085248 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.085950 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.086482 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.086910 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.106005 4747 scope.go:117] "RemoveContainer" containerID="bc579a8eb7e7908bb3c328153964d4c093f6dccdf8270886f217441d15f4776e" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.126691 4747 scope.go:117] "RemoveContainer" containerID="c7e657ba62b4f5d3b689a36ec28cb5450a5cd2b4a7f9d03a5a84e7edbf5b6e7e" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.154724 4747 scope.go:117] "RemoveContainer" containerID="75659faf7bafb093f7c3d2930bb95f8901a4710a1e975d821a5ecbbe38419606" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.178370 4747 scope.go:117] "RemoveContainer" containerID="9b7c78fa75dc306b1ce8bf46b8d39bad109f0fc48c7306719a722e57c7cf5e2c" Nov 26 13:19:22 crc kubenswrapper[4747]: I1126 13:19:22.203600 4747 scope.go:117] "RemoveContainer" containerID="24e196348f90f3d0cc221a84aa5a355bc756b39f8162a3bf3fb2966b688eac08" Nov 26 13:19:23 crc kubenswrapper[4747]: I1126 13:19:23.802749 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:23 crc kubenswrapper[4747]: I1126 13:19:23.804650 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:23 crc kubenswrapper[4747]: I1126 13:19:23.805386 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:23 crc kubenswrapper[4747]: I1126 13:19:23.806045 4747 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: I1126 13:19:24.096292 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7bc00021a32ab121497bab16099ec6006c81d38525867e85e5c60f6b2acde2a5"} Nov 26 13:19:24 crc kubenswrapper[4747]: I1126 13:19:24.097181 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.097382 4747 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:24 crc kubenswrapper[4747]: I1126 13:19:24.097556 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: I1126 13:19:24.098132 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.357450 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.357769 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.358090 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.358480 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.359463 4747 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:24 crc kubenswrapper[4747]: I1126 13:19:24.359511 4747 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.359729 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.560428 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Nov 26 13:19:24 crc kubenswrapper[4747]: E1126 13:19:24.962327 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms" Nov 26 13:19:25 crc kubenswrapper[4747]: E1126 13:19:25.102294 4747 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:19:25 crc kubenswrapper[4747]: E1126 13:19:25.336972 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-lpscr.187b910b5aec5946 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-lpscr,UID:a8e66af2-1f24-47a6-9315-4ac97f474115,APIVersion:v1,ResourceVersion:28458,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:19:17.001787718 +0000 UTC m=+243.988098763,LastTimestamp:2025-11-26 13:19:17.001787718 +0000 UTC m=+243.988098763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:19:25 crc kubenswrapper[4747]: E1126 13:19:25.763900 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s" Nov 26 13:19:27 crc kubenswrapper[4747]: E1126 13:19:27.365563 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="3.2s" Nov 26 13:19:29 crc kubenswrapper[4747]: I1126 13:19:29.384378 4747 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerName="oauth-openshift" containerID="cri-o://0637b954dc5563ef98f843b2130e1c2fc5a7af5383dca2d04fc4e4ce5058527a" gracePeriod=15 Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.137817 4747 generic.go:334] "Generic (PLEG): container finished" podID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerID="0637b954dc5563ef98f843b2130e1c2fc5a7af5383dca2d04fc4e4ce5058527a" exitCode=0 Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.137969 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" event={"ID":"ec6afd64-e5b6-4851-a35e-db5a9490cdcb","Type":"ContainerDied","Data":"0637b954dc5563ef98f843b2130e1c2fc5a7af5383dca2d04fc4e4ce5058527a"} Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.146618 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.146708 4747 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc" exitCode=1 Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.146769 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc"} Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.147988 4747 scope.go:117] "RemoveContainer" containerID="17e09b57c4349e94167f6e6615bfa2a90a0fc73ae7e188f32cc02ffe039119dc" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.148216 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.148427 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.148668 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.148882 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.273937 4747 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.274655 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.275006 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.275384 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.275710 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.276075 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388161 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388241 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388287 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388338 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388412 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388467 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388521 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frzfx\" (UniqueName: \"kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388569 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388649 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388705 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388770 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388830 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388873 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: 
\"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388929 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template\") pod \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\" (UID: \"ec6afd64-e5b6-4851-a35e-db5a9490cdcb\") " Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.388957 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.389377 4747 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.389545 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.389665 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.390414 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.390522 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.395974 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.396520 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.395544 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.396734 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx" (OuterVolumeSpecName: "kube-api-access-frzfx") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "kube-api-access-frzfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.396884 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.397679 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.407345 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.407814 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.408189 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ec6afd64-e5b6-4851-a35e-db5a9490cdcb" (UID: "ec6afd64-e5b6-4851-a35e-db5a9490cdcb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.490904 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frzfx\" (UniqueName: \"kubernetes.io/projected/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-kube-api-access-frzfx\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.490967 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.490989 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491014 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491037 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491080 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491102 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491122 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491140 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491159 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 
13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491179 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491198 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: I1126 13:19:30.491217 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec6afd64-e5b6-4851-a35e-db5a9490cdcb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 13:19:30 crc kubenswrapper[4747]: E1126 13:19:30.566435 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="6.4s" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.157003 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" event={"ID":"ec6afd64-e5b6-4851-a35e-db5a9490cdcb","Type":"ContainerDied","Data":"611f9ce6173846163f6b3af384bfe092c0ca786d3da660ecb01d983ddb114a34"} Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.157082 4747 scope.go:117] "RemoveContainer" containerID="0637b954dc5563ef98f843b2130e1c2fc5a7af5383dca2d04fc4e4ce5058527a" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.157115 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.158443 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.158903 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.159480 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.159895 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.160396 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.162691 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.162811 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2310a67f48520e14a12f9d84504bd01a0fb4f3bb688dc177c8ad487f62f9c893"} Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.163648 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.164123 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc 
kubenswrapper[4747]: I1126 13:19:31.164574 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.165091 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.165679 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.184337 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.184942 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.185551 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.186006 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.186395 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.797400 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.799400 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.800151 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.800736 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.801378 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.801981 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.827300 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.827357 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:31 crc kubenswrapper[4747]: E1126 13:19:31.828123 4747 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:31 crc kubenswrapper[4747]: I1126 13:19:31.828726 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.174774 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d314f6f376aad8d312c3f07d62040523c36f99096e26ffc6b72fa7f9be8444bc"} Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.295961 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.303239 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.304243 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.305130 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.305924 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.306441 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:32 crc kubenswrapper[4747]: I1126 13:19:32.307096 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.186037 4747 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="32f9d5260189b3186542c4387eb90e50a86cc949ce323241a0d9bc0077691dba" exitCode=0 Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.186164 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"32f9d5260189b3186542c4387eb90e50a86cc949ce323241a0d9bc0077691dba"} Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.186413 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.188447 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.188640 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.189095 4747 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:33 crc kubenswrapper[4747]: E1126 13:19:33.189411 4747 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.189741 4747 status_manager.go:851] "Failed to get status for pod" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" pod="openshift-authentication/oauth-openshift-558db77b4-ksg5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ksg5q\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.190282 4747 status_manager.go:851] "Failed to get status for pod" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" pod="openshift-marketplace/redhat-marketplace-lpscr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lpscr\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.190798 4747 status_manager.go:851] "Failed to get status for pod" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" pod="openshift-marketplace/community-operators-94gg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-94gg7\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:33 crc kubenswrapper[4747]: I1126 13:19:33.191285 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Nov 26 13:19:34 crc kubenswrapper[4747]: I1126 13:19:34.221391 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f9575bd2eadc83319340b229d2bd1fb57615ec468106282b38e3e2ff2c190e26"} Nov 26 13:19:34 crc kubenswrapper[4747]: I1126 13:19:34.221996 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d53b0a5793ed9767d39522ebee4ba42790de47135eb8b9fa185cb19c1971ed45"} Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229269 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"caf1083f761e36b9b9513eca661d2be76aee7bcbc227a19bf1cb664bde13eec2"} Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229644 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"214d5bb673c87068d5ac546497cea8cb561840631534578ede9f1fe49061b55b"} Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229667 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229680 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8fcf51532463e5daacc1e3225ba351ae8657008bdcb4e767ce3e1d355b3f46be"} Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229556 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:35 crc kubenswrapper[4747]: I1126 13:19:35.229699 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:36 crc kubenswrapper[4747]: I1126 13:19:36.828863 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:36 crc kubenswrapper[4747]: I1126 13:19:36.829372 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:36 crc kubenswrapper[4747]: I1126 13:19:36.839348 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:40 crc kubenswrapper[4747]: I1126 13:19:40.242766 4747 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:40 crc kubenswrapper[4747]: I1126 13:19:40.347640 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0f9ba744-db4b-42ae-9c84-0098f55db8ab" Nov 26 13:19:41 crc kubenswrapper[4747]: I1126 13:19:41.270115 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:41 crc kubenswrapper[4747]: I1126 13:19:41.270148 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:41 crc kubenswrapper[4747]: I1126 13:19:41.273692 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0f9ba744-db4b-42ae-9c84-0098f55db8ab" Nov 26 13:19:41 crc kubenswrapper[4747]: I1126 13:19:41.275249 4747 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d53b0a5793ed9767d39522ebee4ba42790de47135eb8b9fa185cb19c1971ed45" Nov 26 13:19:41 crc kubenswrapper[4747]: I1126 
13:19:41.275280 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:42 crc kubenswrapper[4747]: I1126 13:19:42.277341 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:42 crc kubenswrapper[4747]: I1126 13:19:42.278455 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="722a046a-0d41-469c-ac7d-f58624c825aa" Nov 26 13:19:42 crc kubenswrapper[4747]: I1126 13:19:42.281407 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0f9ba744-db4b-42ae-9c84-0098f55db8ab" Nov 26 13:19:46 crc kubenswrapper[4747]: I1126 13:19:46.576880 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:19:50 crc kubenswrapper[4747]: I1126 13:19:50.109403 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 26 13:19:50 crc kubenswrapper[4747]: I1126 13:19:50.198262 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 26 13:19:50 crc kubenswrapper[4747]: I1126 13:19:50.740596 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 13:19:50 crc kubenswrapper[4747]: I1126 13:19:50.954544 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 13:19:51 crc kubenswrapper[4747]: I1126 13:19:51.863859 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 13:19:51 crc kubenswrapper[4747]: I1126 13:19:51.865700 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 26 13:19:52 crc kubenswrapper[4747]: I1126 13:19:52.496938 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 13:19:52 crc kubenswrapper[4747]: I1126 13:19:52.571353 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 26 13:19:52 crc kubenswrapper[4747]: I1126 13:19:52.739311 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 26 13:19:52 crc kubenswrapper[4747]: I1126 13:19:52.834531 4747 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.117331 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.277780 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.440651 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.481826 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.530706 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.597043 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.676479 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.817225 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.883561 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.942411 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.968904 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 26 13:19:53 crc kubenswrapper[4747]: I1126 13:19:53.972663 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.018048 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.052589 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.136788 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.227151 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.231276 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.255838 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.263843 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.282919 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.324828 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.398232 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.407782 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.490895 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.505717 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.515234 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.523750 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.524130 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.551841 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.569100 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.605428 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.625481 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.743726 4747 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.748449 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpscr","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-94gg7","openshift-authentication/oauth-openshift-558db77b4-ksg5q"] Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.748520 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.755446 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.758018 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.777816 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.777791722 podStartE2EDuration="14.777791722s" podCreationTimestamp="2025-11-26 13:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:19:54.771842808 +0000 UTC m=+281.758153863" watchObservedRunningTime="2025-11-26 13:19:54.777791722 +0000 UTC m=+281.764102777" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.794686 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 26 13:19:54 crc 
kubenswrapper[4747]: I1126 13:19:54.797995 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.904272 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.975277 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 26 13:19:54 crc kubenswrapper[4747]: I1126 13:19:54.990761 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.024569 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.124223 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.143454 4747 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.144251 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.184454 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.234975 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.261117 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.293681 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.322949 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.359746 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.587750 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.634708 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.685350 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.753989 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.778179 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 26 
13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.784805 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.805657 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" path="/var/lib/kubelet/pods/a86b0da3-17e9-4b7f-a54b-3b54c8f7a906/volumes" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.807207 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" path="/var/lib/kubelet/pods/a8e66af2-1f24-47a6-9315-4ac97f474115/volumes" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.807832 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" path="/var/lib/kubelet/pods/ec6afd64-e5b6-4851-a35e-db5a9490cdcb/volumes" Nov 26 13:19:55 crc kubenswrapper[4747]: I1126 13:19:55.825630 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.011843 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.042160 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.124625 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.159560 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.231461 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.333391 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.364485 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.405458 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.433038 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.509574 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.510379 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.513001 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.544666 4747 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.636770 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.699379 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.748472 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.751151 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.780849 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.813628 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.819022 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.824391 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.983768 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 13:19:56 crc kubenswrapper[4747]: I1126 13:19:56.997812 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.051211 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.136831 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.144426 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.147633 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.242434 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.272647 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.339716 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.391328 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.457982 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.606249 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.640922 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.674465 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.674525 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.700336 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.707383 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.726802 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.763704 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.879364 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.942472 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:19:57 crc kubenswrapper[4747]: I1126 13:19:57.955013 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.210149 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.216798 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.426753 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.519243 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.529293 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.585739 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.646049 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.676307 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.719370 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.728195 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.929584 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 26 13:19:58 crc kubenswrapper[4747]: I1126 13:19:58.936915 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.014868 4747 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.017460 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.068672 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.081501 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.121699 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.217339 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.238617 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.468722 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.475773 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.536530 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.585037 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.668211 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.691388 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.705318 
4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.711153 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.852369 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.900894 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 26 13:19:59 crc kubenswrapper[4747]: I1126 13:19:59.923017 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.149127 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.167901 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.169331 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.255306 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.366202 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.388717 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.445868 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.454129 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.469033 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.595024 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.604780 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.647196 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.678758 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.711423 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 26 13:20:00 crc kubenswrapper[4747]: I1126 13:20:00.798752 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.201998 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.227411 4747 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.252020 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.254336 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.254762 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.320575 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.400041 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.439087 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.591987 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.608580 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.633046 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.654295 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.810542 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.843445 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 13:20:01 crc kubenswrapper[4747]: I1126 13:20:01.889442 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.050278 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.089752 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.128301 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 
13:20:02.151043 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.274553 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.295970 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.307473 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.348346 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.385989 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.512858 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.540724 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.562665 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.743249 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.784624 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.972807 4747 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 13:20:02 crc kubenswrapper[4747]: I1126 13:20:02.973073 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7bc00021a32ab121497bab16099ec6006c81d38525867e85e5c60f6b2acde2a5" gracePeriod=5 Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.003497 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.108329 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.121646 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.212673 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.227776 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 
13:20:03.270939 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.305232 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.417894 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.501805 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.719964 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.764661 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 13:20:03 crc kubenswrapper[4747]: I1126 13:20:03.828415 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.219457 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.260419 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.260418 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.319776 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.359023 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.522976 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.551343 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.617964 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.674089 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.696226 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.746506 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.812883 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 13:20:04 crc kubenswrapper[4747]: I1126 13:20:04.942541 4747 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.099327 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.134874 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.231789 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.646446 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-f229b"] Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.646848 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="extract-content" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.646872 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="extract-content" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.646900 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="extract-content" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.646913 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="extract-content" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.646932 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" containerName="installer" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.646945 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" containerName="installer" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.646965 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.646977 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.647002 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647014 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.647028 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647040 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.647078 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerName="oauth-openshift" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647091 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerName="oauth-openshift" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.647115 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="extract-utilities" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647128 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="extract-utilities" Nov 26 13:20:05 crc kubenswrapper[4747]: E1126 13:20:05.647146 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="extract-utilities" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647158 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="extract-utilities" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647426 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647445 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6afd64-e5b6-4851-a35e-db5a9490cdcb" containerName="oauth-openshift" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647473 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86b0da3-17e9-4b7f-a54b-3b54c8f7a906" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647491 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4052850-78ba-4b9b-b0cd-b5608e621a2c" containerName="installer" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.647514 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e66af2-1f24-47a6-9315-4ac97f474115" containerName="registry-server" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.648300 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.658815 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.659114 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.659248 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.659962 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.660222 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.660348 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.660483 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.660964 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.661106 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.661336 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.662161 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.665079 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.683707 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-f229b"] Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.688311 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.692354 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.709805 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.761944 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " 
pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762124 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762162 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/553093cd-946b-422a-814f-eee7f792dd2b-audit-dir\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762220 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762251 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-audit-policies\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762296 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762344 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762374 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762413 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762497 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddv5\" (UniqueName: \"kubernetes.io/projected/553093cd-946b-422a-814f-eee7f792dd2b-kube-api-access-gddv5\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762537 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.762568 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863324 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863367 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-audit-policies\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863410 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863467 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863492 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863555 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gddv5\" (UniqueName: \"kubernetes.io/projected/553093cd-946b-422a-814f-eee7f792dd2b-kube-api-access-gddv5\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863649 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863678 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863704 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863728 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863750 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/553093cd-946b-422a-814f-eee7f792dd2b-audit-dir\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.863814 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/553093cd-946b-422a-814f-eee7f792dd2b-audit-dir\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.864208 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-audit-policies\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.865523 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.866012 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.866364 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.869263 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.869588 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.869725 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.869855 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.870751 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.870848 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.876554 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.876952 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/553093cd-946b-422a-814f-eee7f792dd2b-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-f229b\" 
(UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.886316 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddv5\" (UniqueName: \"kubernetes.io/projected/553093cd-946b-422a-814f-eee7f792dd2b-kube-api-access-gddv5\") pod \"oauth-openshift-5dc57f868f-f229b\" (UID: \"553093cd-946b-422a-814f-eee7f792dd2b\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.919414 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 13:20:05 crc kubenswrapper[4747]: I1126 13:20:05.994427 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:06 crc kubenswrapper[4747]: I1126 13:20:06.027427 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 13:20:06 crc kubenswrapper[4747]: I1126 13:20:06.146703 4747 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:20:06 crc kubenswrapper[4747]: I1126 13:20:06.470461 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-f229b"] Nov 26 13:20:07 crc kubenswrapper[4747]: I1126 13:20:07.452250 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" event={"ID":"553093cd-946b-422a-814f-eee7f792dd2b","Type":"ContainerStarted","Data":"a7d2825579f9004da09747f589034dcf7ae4747b4f3b5102c039818ad13c3de1"} Nov 26 13:20:07 crc kubenswrapper[4747]: I1126 13:20:07.452593 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" event={"ID":"553093cd-946b-422a-814f-eee7f792dd2b","Type":"ContainerStarted","Data":"74e047771e6e532617b359ebd44b4c4ffe43c2deb272478dce2ff18c60714c4b"} Nov 26 13:20:07 crc kubenswrapper[4747]: I1126 13:20:07.453149 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:07 crc kubenswrapper[4747]: I1126 13:20:07.492639 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" podStartSLOduration=63.49261254 podStartE2EDuration="1m3.49261254s" podCreationTimestamp="2025-11-26 13:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:20:07.490712491 +0000 UTC m=+294.477023596" watchObservedRunningTime="2025-11-26 13:20:07.49261254 +0000 UTC m=+294.478923595" Nov 26 13:20:07 crc kubenswrapper[4747]: I1126 13:20:07.655207 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dc57f868f-f229b" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.462454 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.462530 4747 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="7bc00021a32ab121497bab16099ec6006c81d38525867e85e5c60f6b2acde2a5" exitCode=137 Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.573109 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.573254 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703296 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703382 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703443 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703447 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703476 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703645 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703710 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.703841 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.704337 4747 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.704388 4747 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.704414 4747 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.704437 4747 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.716916 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:20:08 crc kubenswrapper[4747]: I1126 13:20:08.806702 4747 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:09 crc kubenswrapper[4747]: I1126 13:20:09.471879 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:20:09 crc kubenswrapper[4747]: I1126 13:20:09.472096 4747 scope.go:117] "RemoveContainer" containerID="7bc00021a32ab121497bab16099ec6006c81d38525867e85e5c60f6b2acde2a5" Nov 26 13:20:09 crc kubenswrapper[4747]: I1126 13:20:09.472179 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:20:09 crc kubenswrapper[4747]: I1126 13:20:09.810708 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 26 13:20:13 crc kubenswrapper[4747]: I1126 13:20:13.566556 4747 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 26 13:20:16 crc kubenswrapper[4747]: I1126 13:20:16.035403 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 13:20:16 crc kubenswrapper[4747]: I1126 13:20:16.302616 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 13:20:17 crc kubenswrapper[4747]: I1126 13:20:17.052984 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 13:20:17 crc kubenswrapper[4747]: I1126 13:20:17.623493 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 13:20:21 crc kubenswrapper[4747]: I1126 13:20:21.756802 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 26 13:20:23 crc kubenswrapper[4747]: I1126 13:20:23.163070 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 26 13:20:23 crc kubenswrapper[4747]: I1126 13:20:23.292722 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 26 13:20:23 crc kubenswrapper[4747]: I1126 13:20:23.748856 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 26 13:20:25 crc kubenswrapper[4747]: I1126 13:20:25.372961 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 26 13:20:25 crc kubenswrapper[4747]: I1126 13:20:25.688843 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 13:20:28 crc kubenswrapper[4747]: I1126 13:20:28.264928 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 26 13:20:28 crc kubenswrapper[4747]: I1126 13:20:28.922717 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.213856 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.214209 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" containerName="controller-manager" containerID="cri-o://6ec21d65d617bb08263afb61cf677f54d30f6f5b38b0ab85b3d0b11e2c477bd2" gracePeriod=30 Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.309040 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 
13:20:29.309630 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" podUID="82daa056-c08b-4c56-817b-850b31cd016e" containerName="route-controller-manager" containerID="cri-o://64fd1f6a3909308b935a6465a2f05853c3b8e7ecbfad2d5b32d9abc047a036f5" gracePeriod=30
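
gracePeriod=30 is the pod's termination grace period: the kubelet asks the CRI runtime (CRI-O here, per the cri-o:// container IDs) to deliver SIGTERM and to escalate to SIGKILL only if the container is still running once the 30 seconds lapse. Both controller-manager containers exit almost immediately with exitCode=0 below, so no escalation occurs. A generic sketch of that TERM-then-KILL contract (illustrative only, not kubelet source):

    package main

    import (
        "os"
        "syscall"
        "time"
    )

    // stopWithGrace sketches the contract implied by gracePeriod=30 above:
    // request shutdown politely, then force it when the grace period lapses.
    // The kubelet delegates this to the CRI runtime rather than signalling
    // processes directly; this is only the shape of the behaviour.
    func stopWithGrace(p *os.Process, grace time.Duration) {
        _ = p.Signal(syscall.SIGTERM) // polite request to begin shutdown
        done := make(chan struct{})
        go func() { _, _ = p.Wait(); close(done) }()
        select {
        case <-done: // exited on its own, like the exitCode=0 events below
        case <-time.After(grace):
            _ = p.Kill() // SIGKILL; would surface as exitCode=137
        }
    }

    func main() { _ = stopWithGrace } // usage would attach to a real child process
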
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.708593 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config\") pod \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.708676 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z92qx\" (UniqueName: \"kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx\") pod \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.708715 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles\") pod \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.708739 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert\") pod \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.708754 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca\") pod \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\" (UID: \"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.709399 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" (UID: "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.709421 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config" (OuterVolumeSpecName: "config") pod "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" (UID: "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.711170 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca" (OuterVolumeSpecName: "client-ca") pod "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" (UID: "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.715891 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" (UID: "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.716136 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx" (OuterVolumeSpecName: "kube-api-access-z92qx") pod "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" (UID: "071895ee-e8a3-40e3-bfcd-8a175ab1ccf7"). InnerVolumeSpecName "kube-api-access-z92qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.809857 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config\") pod \"82daa056-c08b-4c56-817b-850b31cd016e\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.809894 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca\") pod \"82daa056-c08b-4c56-817b-850b31cd016e\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.809965 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnzqg\" (UniqueName: \"kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg\") pod \"82daa056-c08b-4c56-817b-850b31cd016e\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810010 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert\") pod \"82daa056-c08b-4c56-817b-850b31cd016e\" (UID: \"82daa056-c08b-4c56-817b-850b31cd016e\") " Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810183 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z92qx\" (UniqueName: \"kubernetes.io/projected/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-kube-api-access-z92qx\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810195 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810204 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810212 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810220 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810848 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config" (OuterVolumeSpecName: "config") pod "82daa056-c08b-4c56-817b-850b31cd016e" (UID: 
"82daa056-c08b-4c56-817b-850b31cd016e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.810938 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca" (OuterVolumeSpecName: "client-ca") pod "82daa056-c08b-4c56-817b-850b31cd016e" (UID: "82daa056-c08b-4c56-817b-850b31cd016e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.813805 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82daa056-c08b-4c56-817b-850b31cd016e" (UID: "82daa056-c08b-4c56-817b-850b31cd016e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.814425 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg" (OuterVolumeSpecName: "kube-api-access-vnzqg") pod "82daa056-c08b-4c56-817b-850b31cd016e" (UID: "82daa056-c08b-4c56-817b-850b31cd016e"). InnerVolumeSpecName "kube-api-access-vnzqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.912017 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.912463 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82daa056-c08b-4c56-817b-850b31cd016e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.912478 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnzqg\" (UniqueName: \"kubernetes.io/projected/82daa056-c08b-4c56-817b-850b31cd016e-kube-api-access-vnzqg\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:29 crc kubenswrapper[4747]: I1126 13:20:29.912489 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82daa056-c08b-4c56-817b-850b31cd016e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.613438 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.613431 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9" event={"ID":"82daa056-c08b-4c56-817b-850b31cd016e","Type":"ContainerDied","Data":"3ac538765e121b042b2eb3bf0038772038a48cea9c97da6d63a9911cd5d8ffb3"} Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.613658 4747 scope.go:117] "RemoveContainer" containerID="64fd1f6a3909308b935a6465a2f05853c3b8e7ecbfad2d5b32d9abc047a036f5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.617223 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" event={"ID":"071895ee-e8a3-40e3-bfcd-8a175ab1ccf7","Type":"ContainerDied","Data":"707bf8dbb366227b54900ddd90d9abc160b96a29a0f5f2704322a1595328a89e"} Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.617353 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pwnxf" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.636240 4747 scope.go:117] "RemoveContainer" containerID="6ec21d65d617bb08263afb61cf677f54d30f6f5b38b0ab85b3d0b11e2c477bd2" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.648716 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.653021 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pwnxf"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.663391 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:30 crc kubenswrapper[4747]: E1126 13:20:30.663795 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82daa056-c08b-4c56-817b-850b31cd016e" containerName="route-controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.663824 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="82daa056-c08b-4c56-817b-850b31cd016e" containerName="route-controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: E1126 13:20:30.663869 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" containerName="controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.663885 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" containerName="controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.664114 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" containerName="controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.664132 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="82daa056-c08b-4c56-817b-850b31cd016e" containerName="route-controller-manager" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.664769 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.672697 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.672943 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.673133 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.674984 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.675049 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.674996 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.678944 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.680231 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.685281 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.688991 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.696036 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sgvf9"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.706550 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.711180 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.712842 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.713143 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.713321 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.713503 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.713697 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.713921 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734656 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734749 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734782 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz52\" (UniqueName: \"kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734804 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnkj\" (UniqueName: \"kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734833 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734858 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.734908 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835708 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835753 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835797 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835862 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835890 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835924 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.835957 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnkj\" (UniqueName: \"kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj\") pod 
\"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.836088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz52\" (UniqueName: \"kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.836913 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.836325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.837074 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.837234 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.837287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.837937 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.854640 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: 
I1126 13:20:30.855150 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz52\" (UniqueName: \"kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.855620 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnkj\" (UniqueName: \"kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj\") pod \"route-controller-manager-7f5b8d9779-hkv99\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:30 crc kubenswrapper[4747]: I1126 13:20:30.856462 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert\") pod \"controller-manager-7df4555b7b-xxmn5\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.023999 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.049332 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.213188 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.271288 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:31 crc kubenswrapper[4747]: W1126 13:20:31.279290 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8235abf1_792f_4e5e_a664_ec9ad43e059b.slice/crio-b8ef084ab2fa8bb4aaa6bd562de10749907cf90b23f055d20475ffe4f6db9988 WatchSource:0}: Error finding container b8ef084ab2fa8bb4aaa6bd562de10749907cf90b23f055d20475ffe4f6db9988: Status 404 returned error can't find the container with id b8ef084ab2fa8bb4aaa6bd562de10749907cf90b23f055d20475ffe4f6db9988 Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.624796 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" event={"ID":"8235abf1-792f-4e5e-a664-ec9ad43e059b","Type":"ContainerStarted","Data":"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c"} Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.625223 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.625240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" event={"ID":"8235abf1-792f-4e5e-a664-ec9ad43e059b","Type":"ContainerStarted","Data":"b8ef084ab2fa8bb4aaa6bd562de10749907cf90b23f055d20475ffe4f6db9988"} Nov 26 13:20:31 crc 
kubenswrapper[4747]: I1126 13:20:31.627404 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" event={"ID":"6f1f326e-9076-4cfc-be14-cc20ed38ff2b","Type":"ContainerStarted","Data":"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b"} Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.627465 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" event={"ID":"6f1f326e-9076-4cfc-be14-cc20ed38ff2b","Type":"ContainerStarted","Data":"708414faa1d1b3fd7cb450c5e8619ed5c406e96ea2d3aec128d6d5c2b6038f7c"} Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.627651 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.635413 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.653504 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" podStartSLOduration=2.653488897 podStartE2EDuration="2.653488897s" podCreationTimestamp="2025-11-26 13:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:20:31.651499165 +0000 UTC m=+318.637810180" watchObservedRunningTime="2025-11-26 13:20:31.653488897 +0000 UTC m=+318.639799912" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.676403 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" podStartSLOduration=2.676388237 podStartE2EDuration="2.676388237s" podCreationTimestamp="2025-11-26 13:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:20:31.674106448 +0000 UTC m=+318.660417463" watchObservedRunningTime="2025-11-26 13:20:31.676388237 +0000 UTC m=+318.662699252" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.806705 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071895ee-e8a3-40e3-bfcd-8a175ab1ccf7" path="/var/lib/kubelet/pods/071895ee-e8a3-40e3-bfcd-8a175ab1ccf7/volumes" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.807447 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82daa056-c08b-4c56-817b-850b31cd016e" path="/var/lib/kubelet/pods/82daa056-c08b-4c56-817b-850b31cd016e/volumes" Nov 26 13:20:31 crc kubenswrapper[4747]: I1126 13:20:31.936071 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:32 crc kubenswrapper[4747]: I1126 13:20:32.087693 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 13:20:32 crc kubenswrapper[4747]: I1126 13:20:32.181266 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 13:20:32 crc kubenswrapper[4747]: I1126 13:20:32.876649 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Nov 26 13:20:34 crc kubenswrapper[4747]: I1126 13:20:34.385108 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 26 13:20:35 crc kubenswrapper[4747]: I1126 13:20:35.570261 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 13:20:36 crc kubenswrapper[4747]: I1126 13:20:36.736906 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.104920 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"] Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.105689 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwrtn" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="registry-server" containerID="cri-o://a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4" gracePeriod=2 Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.357823 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.637617 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.660210 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities\") pod \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.660328 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content\") pod \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.660361 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxb6j\" (UniqueName: \"kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j\") pod \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\" (UID: \"12cb1fe3-c93c-4a2b-b13e-c660d9b34012\") " Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.661693 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities" (OuterVolumeSpecName: "utilities") pod "12cb1fe3-c93c-4a2b-b13e-c660d9b34012" (UID: "12cb1fe3-c93c-4a2b-b13e-c660d9b34012"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.668462 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j" (OuterVolumeSpecName: "kube-api-access-wxb6j") pod "12cb1fe3-c93c-4a2b-b13e-c660d9b34012" (UID: "12cb1fe3-c93c-4a2b-b13e-c660d9b34012"). InnerVolumeSpecName "kube-api-access-wxb6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.683333 4747 generic.go:334] "Generic (PLEG): container finished" podID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerID="a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4" exitCode=0 Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.683387 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerDied","Data":"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4"} Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.683429 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwrtn" event={"ID":"12cb1fe3-c93c-4a2b-b13e-c660d9b34012","Type":"ContainerDied","Data":"0834d714928b9ca2839042ec321b865594b89ceb977f74fa06a59c0b1a73ea24"} Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.683459 4747 scope.go:117] "RemoveContainer" containerID="a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.683621 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwrtn" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.713472 4747 scope.go:117] "RemoveContainer" containerID="a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.738264 4747 scope.go:117] "RemoveContainer" containerID="c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.760188 4747 scope.go:117] "RemoveContainer" containerID="a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.761463 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.761516 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxb6j\" (UniqueName: \"kubernetes.io/projected/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-kube-api-access-wxb6j\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:39 crc kubenswrapper[4747]: E1126 13:20:39.764609 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4\": container with ID starting with a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4 not found: ID does not exist" containerID="a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.764663 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4"} err="failed to get container status \"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4\": rpc error: code = NotFound desc = could not find container \"a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4\": container with ID starting with a8fd7a0e2c8b5bd06bd8ba12e9ec9bf31cfead441d4cd762b28f8d0dcac56bf4 not found: ID does not exist" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.764706 4747 scope.go:117] 
"RemoveContainer" containerID="a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332" Nov 26 13:20:39 crc kubenswrapper[4747]: E1126 13:20:39.765044 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332\": container with ID starting with a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332 not found: ID does not exist" containerID="a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.765174 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332"} err="failed to get container status \"a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332\": rpc error: code = NotFound desc = could not find container \"a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332\": container with ID starting with a15c999e04867d2f1d25be2479cc7bc988bed8b58ede81d7247ef8fdd7125332 not found: ID does not exist" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.765206 4747 scope.go:117] "RemoveContainer" containerID="c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02" Nov 26 13:20:39 crc kubenswrapper[4747]: E1126 13:20:39.765480 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02\": container with ID starting with c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02 not found: ID does not exist" containerID="c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.765521 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02"} err="failed to get container status \"c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02\": rpc error: code = NotFound desc = could not find container \"c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02\": container with ID starting with c6604a4153b68e955f6483d2c3afd8b450da9f1c01fc9a624d4692be3bffed02 not found: ID does not exist" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.818585 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12cb1fe3-c93c-4a2b-b13e-c660d9b34012" (UID: "12cb1fe3-c93c-4a2b-b13e-c660d9b34012"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.864461 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cb1fe3-c93c-4a2b-b13e-c660d9b34012-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:39 crc kubenswrapper[4747]: I1126 13:20:39.925248 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 26 13:20:40 crc kubenswrapper[4747]: I1126 13:20:40.027296 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"] Nov 26 13:20:40 crc kubenswrapper[4747]: I1126 13:20:40.033375 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwrtn"] Nov 26 13:20:41 crc kubenswrapper[4747]: I1126 13:20:41.540783 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 13:20:41 crc kubenswrapper[4747]: I1126 13:20:41.811671 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" path="/var/lib/kubelet/pods/12cb1fe3-c93c-4a2b-b13e-c660d9b34012/volumes" Nov 26 13:20:42 crc kubenswrapper[4747]: I1126 13:20:42.727135 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:42 crc kubenswrapper[4747]: I1126 13:20:42.727454 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" podUID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" containerName="controller-manager" containerID="cri-o://cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b" gracePeriod=30 Nov 26 13:20:42 crc kubenswrapper[4747]: I1126 13:20:42.754741 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:42 crc kubenswrapper[4747]: I1126 13:20:42.755035 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" podUID="8235abf1-792f-4e5e-a664-ec9ad43e059b" containerName="route-controller-manager" containerID="cri-o://3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c" gracePeriod=30 Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.339812 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.346177 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413571 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert\") pod \"8235abf1-792f-4e5e-a664-ec9ad43e059b\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413655 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnkj\" (UniqueName: \"kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj\") pod \"8235abf1-792f-4e5e-a664-ec9ad43e059b\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413694 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles\") pod \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413751 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config\") pod \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413798 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqz52\" (UniqueName: \"kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52\") pod \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413863 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca\") pod \"8235abf1-792f-4e5e-a664-ec9ad43e059b\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413920 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca\") pod \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.413970 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config\") pod \"8235abf1-792f-4e5e-a664-ec9ad43e059b\" (UID: \"8235abf1-792f-4e5e-a664-ec9ad43e059b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.414006 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert\") pod \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\" (UID: \"6f1f326e-9076-4cfc-be14-cc20ed38ff2b\") " Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.417649 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6f1f326e-9076-4cfc-be14-cc20ed38ff2b" 
(UID: "6f1f326e-9076-4cfc-be14-cc20ed38ff2b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.418099 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config" (OuterVolumeSpecName: "config") pod "6f1f326e-9076-4cfc-be14-cc20ed38ff2b" (UID: "6f1f326e-9076-4cfc-be14-cc20ed38ff2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.418144 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "6f1f326e-9076-4cfc-be14-cc20ed38ff2b" (UID: "6f1f326e-9076-4cfc-be14-cc20ed38ff2b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.418832 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8235abf1-792f-4e5e-a664-ec9ad43e059b" (UID: "8235abf1-792f-4e5e-a664-ec9ad43e059b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.418947 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config" (OuterVolumeSpecName: "config") pod "8235abf1-792f-4e5e-a664-ec9ad43e059b" (UID: "8235abf1-792f-4e5e-a664-ec9ad43e059b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.426074 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6f1f326e-9076-4cfc-be14-cc20ed38ff2b" (UID: "6f1f326e-9076-4cfc-be14-cc20ed38ff2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.426088 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8235abf1-792f-4e5e-a664-ec9ad43e059b" (UID: "8235abf1-792f-4e5e-a664-ec9ad43e059b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.426159 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj" (OuterVolumeSpecName: "kube-api-access-8bnkj") pod "8235abf1-792f-4e5e-a664-ec9ad43e059b" (UID: "8235abf1-792f-4e5e-a664-ec9ad43e059b"). InnerVolumeSpecName "kube-api-access-8bnkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.426254 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52" (OuterVolumeSpecName: "kube-api-access-pqz52") pod "6f1f326e-9076-4cfc-be14-cc20ed38ff2b" (UID: "6f1f326e-9076-4cfc-be14-cc20ed38ff2b"). 
InnerVolumeSpecName "kube-api-access-pqz52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.500920 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516189 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516239 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516262 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8235abf1-792f-4e5e-a664-ec9ad43e059b-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516276 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516290 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8235abf1-792f-4e5e-a664-ec9ad43e059b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516310 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bnkj\" (UniqueName: \"kubernetes.io/projected/8235abf1-792f-4e5e-a664-ec9ad43e059b-kube-api-access-8bnkj\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516323 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516337 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.516349 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqz52\" (UniqueName: \"kubernetes.io/projected/6f1f326e-9076-4cfc-be14-cc20ed38ff2b-kube-api-access-pqz52\") on node \"crc\" DevicePath \"\"" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.733049 4747 generic.go:334] "Generic (PLEG): container finished" podID="8235abf1-792f-4e5e-a664-ec9ad43e059b" containerID="3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c" exitCode=0 Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.733295 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" event={"ID":"8235abf1-792f-4e5e-a664-ec9ad43e059b","Type":"ContainerDied","Data":"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c"} Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.733333 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.733363 4747 scope.go:117] "RemoveContainer" containerID="3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.733344 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99" event={"ID":"8235abf1-792f-4e5e-a664-ec9ad43e059b","Type":"ContainerDied","Data":"b8ef084ab2fa8bb4aaa6bd562de10749907cf90b23f055d20475ffe4f6db9988"} Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.736010 4747 generic.go:334] "Generic (PLEG): container finished" podID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" containerID="cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b" exitCode=0 Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.736130 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" event={"ID":"6f1f326e-9076-4cfc-be14-cc20ed38ff2b","Type":"ContainerDied","Data":"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b"} Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.736234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" event={"ID":"6f1f326e-9076-4cfc-be14-cc20ed38ff2b","Type":"ContainerDied","Data":"708414faa1d1b3fd7cb450c5e8619ed5c406e96ea2d3aec128d6d5c2b6038f7c"} Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.736365 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4555b7b-xxmn5" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.755658 4747 scope.go:117] "RemoveContainer" containerID="3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c" Nov 26 13:20:43 crc kubenswrapper[4747]: E1126 13:20:43.756323 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c\": container with ID starting with 3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c not found: ID does not exist" containerID="3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.756408 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c"} err="failed to get container status \"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c\": rpc error: code = NotFound desc = could not find container \"3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c\": container with ID starting with 3fc0fe50fd3711215b2a050153ebfbb3060d546ac4a6ccd3fcf5574ca60ffc1c not found: ID does not exist" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.756464 4747 scope.go:117] "RemoveContainer" containerID="cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.784419 4747 scope.go:117] "RemoveContainer" containerID="cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.784748 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:43 crc kubenswrapper[4747]: E1126 13:20:43.784912 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b\": container with ID starting with cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b not found: ID does not exist" containerID="cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.784944 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b"} err="failed to get container status \"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b\": rpc error: code = NotFound desc = could not find container \"cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b\": container with ID starting with cd4f573077976ea84dcfe1ec0fd7867e839b0959ab9e525295c231e6ef86db8b not found: ID does not exist" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.798161 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7df4555b7b-xxmn5"] Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.799256 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.813173 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" path="/var/lib/kubelet/pods/6f1f326e-9076-4cfc-be14-cc20ed38ff2b/volumes" Nov 26 13:20:43 crc kubenswrapper[4747]: I1126 13:20:43.814032 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5b8d9779-hkv99"] Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.672342 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"] Nov 26 13:20:44 crc kubenswrapper[4747]: E1126 13:20:44.672953 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="registry-server" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.672976 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="registry-server" Nov 26 13:20:44 crc kubenswrapper[4747]: E1126 13:20:44.673002 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="extract-utilities" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.673014 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="extract-utilities" Nov 26 13:20:44 crc kubenswrapper[4747]: E1126 13:20:44.673042 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="extract-content" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.675911 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="extract-content" Nov 26 13:20:44 crc kubenswrapper[4747]: E1126 13:20:44.675959 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8235abf1-792f-4e5e-a664-ec9ad43e059b" 
containerName="route-controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.675975 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8235abf1-792f-4e5e-a664-ec9ad43e059b" containerName="route-controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: E1126 13:20:44.676002 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" containerName="controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.676015 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" containerName="controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.676214 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cb1fe3-c93c-4a2b-b13e-c660d9b34012" containerName="registry-server" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.676246 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8235abf1-792f-4e5e-a664-ec9ad43e059b" containerName="route-controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.676268 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1f326e-9076-4cfc-be14-cc20ed38ff2b" containerName="controller-manager" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.676880 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.680164 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.680176 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.680557 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.680724 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.680760 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"] Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.681866 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.686806 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.687003 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.687143 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.687271 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.687331 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.687993 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.688396 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.691982 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.702588 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.703562 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"] Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.715188 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"] Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.731174 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.731257 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbrb\" (UniqueName: \"kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.731428 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchfn\" (UniqueName: \"kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc 
kubenswrapper[4747]: I1126 13:20:44.731500 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.731597 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.732613 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.732758 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.732799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.732939 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.834872 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.834988 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835088 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbrb\" (UniqueName: \"kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835144 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchfn\" (UniqueName: \"kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835241 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835282 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835316 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.835415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.837007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.837358 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.838451 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.839324 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.839418 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.841902 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.844041 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.869750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchfn\" (UniqueName: \"kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn\") pod \"controller-manager-7c45df54bf-7tv6c\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:44 crc kubenswrapper[4747]: I1126 13:20:44.870954 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbrb\" (UniqueName: \"kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb\") pod \"route-controller-manager-5bdf786554-854xf\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") " pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.011188 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.034929 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.356034 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"]
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.407225 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"]
Nov 26 13:20:45 crc kubenswrapper[4747]: W1126 13:20:45.423981 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75538030_097e_4793_864a_3915871e49ca.slice/crio-36cdb550c7e17cf83caad4414e9bb601edebe517fe5b6100a6d3a2a863aefc8b WatchSource:0}: Error finding container 36cdb550c7e17cf83caad4414e9bb601edebe517fe5b6100a6d3a2a863aefc8b: Status 404 returned error can't find the container with id 36cdb550c7e17cf83caad4414e9bb601edebe517fe5b6100a6d3a2a863aefc8b
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.760157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" event={"ID":"75538030-097e-4793-864a-3915871e49ca","Type":"ContainerStarted","Data":"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"}
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.760565 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.760588 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" event={"ID":"75538030-097e-4793-864a-3915871e49ca","Type":"ContainerStarted","Data":"36cdb550c7e17cf83caad4414e9bb601edebe517fe5b6100a6d3a2a863aefc8b"}
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.761799 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" event={"ID":"9d915b53-623d-40a4-a386-add26fb90c4a","Type":"ContainerStarted","Data":"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"}
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.761883 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.761896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" event={"ID":"9d915b53-623d-40a4-a386-add26fb90c4a","Type":"ContainerStarted","Data":"4069e6f68ffb1ac62b824619b98ae0b6b1cf5e442db2dd2a5a83527f0a66480a"}
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.765932 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.779124 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" podStartSLOduration=3.779102377 podStartE2EDuration="3.779102377s" podCreationTimestamp="2025-11-26 13:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:20:45.778279096 +0000 UTC m=+332.764590111" watchObservedRunningTime="2025-11-26 13:20:45.779102377 +0000 UTC m=+332.765413392"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.821496 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" podStartSLOduration=3.82147595 podStartE2EDuration="3.82147595s" podCreationTimestamp="2025-11-26 13:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:20:45.821013568 +0000 UTC m=+332.807324583" watchObservedRunningTime="2025-11-26 13:20:45.82147595 +0000 UTC m=+332.807786965"
Nov 26 13:20:45 crc kubenswrapper[4747]: I1126 13:20:45.832666 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8235abf1-792f-4e5e-a664-ec9ad43e059b" path="/var/lib/kubelet/pods/8235abf1-792f-4e5e-a664-ec9ad43e059b/volumes"
Nov 26 13:20:46 crc kubenswrapper[4747]: I1126 13:20:46.135622 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.213038 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"]
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.214262 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" podUID="75538030-097e-4793-864a-3915871e49ca" containerName="controller-manager" containerID="cri-o://6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767" gracePeriod=30
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.884972 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.935386 4747 generic.go:334] "Generic (PLEG): container finished" podID="75538030-097e-4793-864a-3915871e49ca" containerID="6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767" exitCode=0
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.935433 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" event={"ID":"75538030-097e-4793-864a-3915871e49ca","Type":"ContainerDied","Data":"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"}
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.935472 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c" event={"ID":"75538030-097e-4793-864a-3915871e49ca","Type":"ContainerDied","Data":"36cdb550c7e17cf83caad4414e9bb601edebe517fe5b6100a6d3a2a863aefc8b"}
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.935498 4747 scope.go:117] "RemoveContainer" containerID="6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.935522 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.959580 4747 scope.go:117] "RemoveContainer" containerID="6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"
Nov 26 13:21:09 crc kubenswrapper[4747]: E1126 13:21:09.960153 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767\": container with ID starting with 6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767 not found: ID does not exist" containerID="6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.960223 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767"} err="failed to get container status \"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767\": rpc error: code = NotFound desc = could not find container \"6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767\": container with ID starting with 6a9b87808f2ef13da1e62108f458e91b0e0fc2bb57dca8d494aaa526ce796767 not found: ID does not exist"
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.988672 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca\") pod \"75538030-097e-4793-864a-3915871e49ca\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") "
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.988733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config\") pod \"75538030-097e-4793-864a-3915871e49ca\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") "
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.988812 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles\") pod \"75538030-097e-4793-864a-3915871e49ca\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") "
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.988852 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchfn\" (UniqueName: \"kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn\") pod \"75538030-097e-4793-864a-3915871e49ca\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") "
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.988916 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert\") pod \"75538030-097e-4793-864a-3915871e49ca\" (UID: \"75538030-097e-4793-864a-3915871e49ca\") "
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.990482 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "75538030-097e-4793-864a-3915871e49ca" (UID: "75538030-097e-4793-864a-3915871e49ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.990536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config" (OuterVolumeSpecName: "config") pod "75538030-097e-4793-864a-3915871e49ca" (UID: "75538030-097e-4793-864a-3915871e49ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:21:09 crc kubenswrapper[4747]: I1126 13:21:09.991119 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75538030-097e-4793-864a-3915871e49ca" (UID: "75538030-097e-4793-864a-3915871e49ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.002553 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn" (OuterVolumeSpecName: "kube-api-access-xchfn") pod "75538030-097e-4793-864a-3915871e49ca" (UID: "75538030-097e-4793-864a-3915871e49ca"). InnerVolumeSpecName "kube-api-access-xchfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.006598 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75538030-097e-4793-864a-3915871e49ca" (UID: "75538030-097e-4793-864a-3915871e49ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.092291 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75538030-097e-4793-864a-3915871e49ca-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.092596 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-client-ca\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.092758 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.092881 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75538030-097e-4793-864a-3915871e49ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.093026 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchfn\" (UniqueName: \"kubernetes.io/projected/75538030-097e-4793-864a-3915871e49ca-kube-api-access-xchfn\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.287762 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"]
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.292570 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7tv6c"]
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.686491 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"]
Nov 26 13:21:10 crc kubenswrapper[4747]: E1126 13:21:10.686746 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75538030-097e-4793-864a-3915871e49ca" containerName="controller-manager"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.686765 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="75538030-097e-4793-864a-3915871e49ca" containerName="controller-manager"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.686933 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="75538030-097e-4793-864a-3915871e49ca" containerName="controller-manager"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.687424 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.691560 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.691830 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.692719 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.692734 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.693048 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.693265 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.700966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-proxy-ca-bundles\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.701039 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-client-ca\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.701133 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-serving-cert\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.701187 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgzzh\" (UniqueName: \"kubernetes.io/projected/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-kube-api-access-pgzzh\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.701222 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-config\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.704832 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.706911 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"]
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.802281 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-proxy-ca-bundles\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.802625 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-client-ca\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.802835 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-serving-cert\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.802993 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgzzh\" (UniqueName: \"kubernetes.io/projected/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-kube-api-access-pgzzh\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.803148 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-config\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.804294 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-client-ca\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.804456 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-proxy-ca-bundles\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.805046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-config\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.810810 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-serving-cert\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:10 crc kubenswrapper[4747]: I1126 13:21:10.828574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgzzh\" (UniqueName: \"kubernetes.io/projected/a2b98dd5-009e-41c0-9cba-37ce22d55ae3-kube-api-access-pgzzh\") pod \"controller-manager-7b498f4dd7-pzqbw\" (UID: \"a2b98dd5-009e-41c0-9cba-37ce22d55ae3\") " pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.043197 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.525174 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"]
Nov 26 13:21:11 crc kubenswrapper[4747]: W1126 13:21:11.536112 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b98dd5_009e_41c0_9cba_37ce22d55ae3.slice/crio-c0e987b18966a83d5f4234d69d436c23554f30c9658477d9900d692176cac4e4 WatchSource:0}: Error finding container c0e987b18966a83d5f4234d69d436c23554f30c9658477d9900d692176cac4e4: Status 404 returned error can't find the container with id c0e987b18966a83d5f4234d69d436c23554f30c9658477d9900d692176cac4e4
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.808192 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75538030-097e-4793-864a-3915871e49ca" path="/var/lib/kubelet/pods/75538030-097e-4793-864a-3915871e49ca/volumes"
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.952104 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw" event={"ID":"a2b98dd5-009e-41c0-9cba-37ce22d55ae3","Type":"ContainerStarted","Data":"8b4a3a34bbbc7aa87b56c35c94d2a976c0e838c44aaf496d015245522c043ab3"}
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.952204 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw" event={"ID":"a2b98dd5-009e-41c0-9cba-37ce22d55ae3","Type":"ContainerStarted","Data":"c0e987b18966a83d5f4234d69d436c23554f30c9658477d9900d692176cac4e4"}
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.952408 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.962504 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw"
Nov 26 13:21:11 crc kubenswrapper[4747]: I1126 13:21:11.990133 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b498f4dd7-pzqbw" podStartSLOduration=2.990087052 podStartE2EDuration="2.990087052s" podCreationTimestamp="2025-11-26 13:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:21:11.98264383 +0000 UTC m=+358.968954855" watchObservedRunningTime="2025-11-26 13:21:11.990087052 +0000 UTC m=+358.976398077"
Nov 26 13:21:22 crc kubenswrapper[4747]: I1126 13:21:22.943827 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qsbxg"]
Nov 26 13:21:22 crc kubenswrapper[4747]: I1126 13:21:22.949958 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:22 crc kubenswrapper[4747]: I1126 13:21:22.970826 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qsbxg"]
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.081442 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.081504 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w964p\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-kube-api-access-w964p\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.081642 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-trusted-ca\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.081909 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.081997 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-certificates\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.082044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.082395 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-bound-sa-token\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.082542 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-tls\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.110708 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183516 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-certificates\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183583 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183645 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-bound-sa-token\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183686 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-tls\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183750 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183798 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w964p\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-kube-api-access-w964p\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.183830 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-trusted-ca\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.184669 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.185775 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-trusted-ca\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.187313 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-certificates\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.192239 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.197245 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-registry-tls\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.211034 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-bound-sa-token\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.215273 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w964p\" (UniqueName: \"kubernetes.io/projected/4e793927-69ec-4c69-b4e0-4d5cf2c43c1f-kube-api-access-w964p\") pod \"image-registry-66df7c8f76-qsbxg\" (UID: \"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f\") " pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.298385 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:23 crc kubenswrapper[4747]: I1126 13:21:23.779507 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qsbxg"]
Nov 26 13:21:24 crc kubenswrapper[4747]: I1126 13:21:24.039681 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg" event={"ID":"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f","Type":"ContainerStarted","Data":"40965b279760d774ddb9740962ab9697f4087c8b626030499debff13a9fa8bbe"}
Nov 26 13:21:24 crc kubenswrapper[4747]: I1126 13:21:24.039721 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg" event={"ID":"4e793927-69ec-4c69-b4e0-4d5cf2c43c1f","Type":"ContainerStarted","Data":"7a84e96c65beb8d5ff36bcd5dc13fef941e8b88c9872cf64dc9f7fa302396d1c"}
Nov 26 13:21:24 crc kubenswrapper[4747]: I1126 13:21:24.040514 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.198897 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg" podStartSLOduration=7.198876154 podStartE2EDuration="7.198876154s" podCreationTimestamp="2025-11-26 13:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:21:24.074044323 +0000 UTC m=+371.060355338" watchObservedRunningTime="2025-11-26 13:21:29.198876154 +0000 UTC m=+376.185187169"
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.200821 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"]
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.201211 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" podUID="9d915b53-623d-40a4-a386-add26fb90c4a" containerName="route-controller-manager" containerID="cri-o://ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295" gracePeriod=30
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.687365 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.781407 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config\") pod \"9d915b53-623d-40a4-a386-add26fb90c4a\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") "
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.781468 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert\") pod \"9d915b53-623d-40a4-a386-add26fb90c4a\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") "
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.781519 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbrb\" (UniqueName: \"kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb\") pod \"9d915b53-623d-40a4-a386-add26fb90c4a\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") "
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.781541 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca\") pod \"9d915b53-623d-40a4-a386-add26fb90c4a\" (UID: \"9d915b53-623d-40a4-a386-add26fb90c4a\") "
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.782861 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d915b53-623d-40a4-a386-add26fb90c4a" (UID: "9d915b53-623d-40a4-a386-add26fb90c4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.783911 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config" (OuterVolumeSpecName: "config") pod "9d915b53-623d-40a4-a386-add26fb90c4a" (UID: "9d915b53-623d-40a4-a386-add26fb90c4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.787204 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d915b53-623d-40a4-a386-add26fb90c4a" (UID: "9d915b53-623d-40a4-a386-add26fb90c4a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.787660 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb" (OuterVolumeSpecName: "kube-api-access-qtbrb") pod "9d915b53-623d-40a4-a386-add26fb90c4a" (UID: "9d915b53-623d-40a4-a386-add26fb90c4a"). InnerVolumeSpecName "kube-api-access-qtbrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.883508 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.883566 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d915b53-623d-40a4-a386-add26fb90c4a-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.883595 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbrb\" (UniqueName: \"kubernetes.io/projected/9d915b53-623d-40a4-a386-add26fb90c4a-kube-api-access-qtbrb\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:29 crc kubenswrapper[4747]: I1126 13:21:29.883622 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d915b53-623d-40a4-a386-add26fb90c4a-client-ca\") on node \"crc\" DevicePath \"\""
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.085544 4747 generic.go:334] "Generic (PLEG): container finished" podID="9d915b53-623d-40a4-a386-add26fb90c4a" containerID="ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295" exitCode=0
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.085642 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.085641 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" event={"ID":"9d915b53-623d-40a4-a386-add26fb90c4a","Type":"ContainerDied","Data":"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"}
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.086431 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf" event={"ID":"9d915b53-623d-40a4-a386-add26fb90c4a","Type":"ContainerDied","Data":"4069e6f68ffb1ac62b824619b98ae0b6b1cf5e442db2dd2a5a83527f0a66480a"}
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.086468 4747 scope.go:117] "RemoveContainer" containerID="ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.118675 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"]
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.121898 4747 scope.go:117] "RemoveContainer" containerID="ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"
Nov 26 13:21:30 crc kubenswrapper[4747]: E1126 13:21:30.123387 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295\": container with ID starting with ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295 not found: ID does not exist" containerID="ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.123502 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295"} err="failed to get container status \"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295\": rpc error: code = NotFound desc = could not find container \"ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295\": container with ID starting with ded3a167dfea801d02dcf2d155605881e0ecd8b4eb440af771c5b531c0235295 not found: ID does not exist"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.126538 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdf786554-854xf"]
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.703811 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"]
Nov 26 13:21:30 crc kubenswrapper[4747]: E1126 13:21:30.704148 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d915b53-623d-40a4-a386-add26fb90c4a" containerName="route-controller-manager"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.704169 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d915b53-623d-40a4-a386-add26fb90c4a" containerName="route-controller-manager"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.704511 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d915b53-623d-40a4-a386-add26fb90c4a" containerName="route-controller-manager"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.705818 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.709770 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.710228 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.710520 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.710566 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.710679 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.715708 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.725922 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"]
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.799410 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllhh\" (UniqueName: \"kubernetes.io/projected/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-kube-api-access-wllhh\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.799538 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-serving-cert\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.799581 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-client-ca\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.799657 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-config\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.900797 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-serving-cert\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.901298 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-client-ca\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.901599 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-config\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.901685 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllhh\" (UniqueName: \"kubernetes.io/projected/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-kube-api-access-wllhh\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.903522 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-client-ca\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.904147 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-config\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.911196 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-serving-cert\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:30 crc kubenswrapper[4747]: I1126 13:21:30.925821 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllhh\" (UniqueName: \"kubernetes.io/projected/31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5-kube-api-access-wllhh\") pod \"route-controller-manager-5564698bbb-jhdcv\" (UID: \"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5\") " pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:31 crc kubenswrapper[4747]: I1126 13:21:31.041233 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:31 crc kubenswrapper[4747]: I1126 13:21:31.520764 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"]
Nov 26 13:21:31 crc kubenswrapper[4747]: W1126 13:21:31.532123 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31bacf09_ddd5_4b02_9cf3_5b7fc83d9ce5.slice/crio-dda3e1bdf290cc3743da39822e52c4679dfce7fad0618fdfa17faff0e8c85a30 WatchSource:0}: Error finding container dda3e1bdf290cc3743da39822e52c4679dfce7fad0618fdfa17faff0e8c85a30: Status 404 returned error can't find the container with id dda3e1bdf290cc3743da39822e52c4679dfce7fad0618fdfa17faff0e8c85a30
Nov 26 13:21:31 crc kubenswrapper[4747]: I1126 13:21:31.812931 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d915b53-623d-40a4-a386-add26fb90c4a" path="/var/lib/kubelet/pods/9d915b53-623d-40a4-a386-add26fb90c4a/volumes"
Nov 26 13:21:32 crc kubenswrapper[4747]: I1126 13:21:32.104383 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv" event={"ID":"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5","Type":"ContainerStarted","Data":"a1a7df2b8ee05cf3d872e7c81c021d4ad6e8b982f013feac1bb9ffef5921edbc"}
Nov 26 13:21:32 crc kubenswrapper[4747]: I1126 13:21:32.104446 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv" event={"ID":"31bacf09-ddd5-4b02-9cf3-5b7fc83d9ce5","Type":"ContainerStarted","Data":"dda3e1bdf290cc3743da39822e52c4679dfce7fad0618fdfa17faff0e8c85a30"}
Nov 26 13:21:32 crc kubenswrapper[4747]: I1126 13:21:32.104768 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:32 crc kubenswrapper[4747]: I1126 13:21:32.113764 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv"
Nov 26 13:21:32 crc kubenswrapper[4747]: I1126 13:21:32.126155 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5564698bbb-jhdcv" podStartSLOduration=3.126130165 podStartE2EDuration="3.126130165s" podCreationTimestamp="2025-11-26 13:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:21:32.121813563 +0000 UTC m=+379.108124618" watchObservedRunningTime="2025-11-26 13:21:32.126130165 +0000 UTC m=+379.112441240"
Nov 26 13:21:33 crc kubenswrapper[4747]: I1126 13:21:33.418349 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:21:33 crc kubenswrapper[4747]: I1126 13:21:33.418663 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.688947 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.689937 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6s4vv" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="registry-server" containerID="cri-o://4861e583f15f0a681eb5b84d073ae394c2648fc575e2b60f085000e7aad749db" gracePeriod=30
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.696636 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4f7qf"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.697256 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4f7qf" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="registry-server" containerID="cri-o://8f3e255b0ef929e08ea9d3c725e32c4a1a784d36d0f6e3460765fbcb13d182f4" gracePeriod=30
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.719553 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.719836 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator" containerID="cri-o://6f5be912877d3cc528e6d6d5458005076af34a582e9a9a4d7732b865158dd190" gracePeriod=30
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.726188 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.726536 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rqbp5" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="registry-server" containerID="cri-o://0c0ff5a2b0a6151346be6928a092ce684f9b131155dd0b470ba747b8a1341814" gracePeriod=30
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.733090 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.733337 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hf2r6" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="registry-server" containerID="cri-o://403921d7cbdc4b0e8973ce49b40a023d0756e83c7cc656121199980438b46dc9" gracePeriod=30
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.751822 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mqnn7"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.752603 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.758722 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mqnn7"]
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.837989 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjqd\" (UniqueName: \"kubernetes.io/projected/20bd504f-0b9c-407b-968c-a2ef32da0158-kube-api-access-gqjqd\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.838133 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.838190 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.938859 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjqd\" (UniqueName: \"kubernetes.io/projected/20bd504f-0b9c-407b-968c-a2ef32da0158-kube-api-access-gqjqd\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.938903 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.938929 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.939986 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.944968 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20bd504f-0b9c-407b-968c-a2ef32da0158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:39 crc kubenswrapper[4747]: I1126 13:21:39.953191 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjqd\" (UniqueName: \"kubernetes.io/projected/20bd504f-0b9c-407b-968c-a2ef32da0158-kube-api-access-gqjqd\") pod \"marketplace-operator-79b997595-mqnn7\" (UID: \"20bd504f-0b9c-407b-968c-a2ef32da0158\") " pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.158761 4747 generic.go:334] "Generic (PLEG): container finished" podID="af471b2c-feb1-40af-bb70-4b41459277c3" containerID="8f3e255b0ef929e08ea9d3c725e32c4a1a784d36d0f6e3460765fbcb13d182f4" exitCode=0
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.158834 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerDied","Data":"8f3e255b0ef929e08ea9d3c725e32c4a1a784d36d0f6e3460765fbcb13d182f4"}
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.163223 4747 generic.go:334] "Generic (PLEG): container finished" podID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerID="4861e583f15f0a681eb5b84d073ae394c2648fc575e2b60f085000e7aad749db" exitCode=0
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.163302 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerDied","Data":"4861e583f15f0a681eb5b84d073ae394c2648fc575e2b60f085000e7aad749db"}
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.165891 4747 generic.go:334] "Generic (PLEG): container finished" podID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerID="0c0ff5a2b0a6151346be6928a092ce684f9b131155dd0b470ba747b8a1341814" exitCode=0
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.165947 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerDied","Data":"0c0ff5a2b0a6151346be6928a092ce684f9b131155dd0b470ba747b8a1341814"}
Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.168244 4747 generic.go:334] "Generic (PLEG): container finished" podID="e56046c3-771a-4f54-afba-59f160f1e415" containerID="403921d7cbdc4b0e8973ce49b40a023d0756e83c7cc656121199980438b46dc9" exitCode=0
Nov 26 13:21:40 crc
kubenswrapper[4747]: I1126 13:21:40.168310 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerDied","Data":"403921d7cbdc4b0e8973ce49b40a023d0756e83c7cc656121199980438b46dc9"} Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.170039 4747 generic.go:334] "Generic (PLEG): container finished" podID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerID="6f5be912877d3cc528e6d6d5458005076af34a582e9a9a4d7732b865158dd190" exitCode=0 Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.170096 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" event={"ID":"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d","Type":"ContainerDied","Data":"6f5be912877d3cc528e6d6d5458005076af34a582e9a9a4d7732b865158dd190"} Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.199556 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.213723 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.344212 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.363566 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5vhw\" (UniqueName: \"kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw\") pod \"af471b2c-feb1-40af-bb70-4b41459277c3\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.363611 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content\") pod \"af471b2c-feb1-40af-bb70-4b41459277c3\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.363664 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities\") pod \"af471b2c-feb1-40af-bb70-4b41459277c3\" (UID: \"af471b2c-feb1-40af-bb70-4b41459277c3\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.365316 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities" (OuterVolumeSpecName: "utilities") pod "af471b2c-feb1-40af-bb70-4b41459277c3" (UID: "af471b2c-feb1-40af-bb70-4b41459277c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.368673 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw" (OuterVolumeSpecName: "kube-api-access-k5vhw") pod "af471b2c-feb1-40af-bb70-4b41459277c3" (UID: "af471b2c-feb1-40af-bb70-4b41459277c3"). InnerVolumeSpecName "kube-api-access-k5vhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.416431 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.423502 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.425717 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.466708 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content\") pod \"e56046c3-771a-4f54-afba-59f160f1e415\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.466902 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af471b2c-feb1-40af-bb70-4b41459277c3" (UID: "af471b2c-feb1-40af-bb70-4b41459277c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.466929 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgwn6\" (UniqueName: \"kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6\") pod \"e56046c3-771a-4f54-afba-59f160f1e415\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.467085 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities\") pod \"e56046c3-771a-4f54-afba-59f160f1e415\" (UID: \"e56046c3-771a-4f54-afba-59f160f1e415\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.467611 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5vhw\" (UniqueName: \"kubernetes.io/projected/af471b2c-feb1-40af-bb70-4b41459277c3-kube-api-access-k5vhw\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.467641 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.467656 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af471b2c-feb1-40af-bb70-4b41459277c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.469359 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities" (OuterVolumeSpecName: "utilities") pod "e56046c3-771a-4f54-afba-59f160f1e415" (UID: "e56046c3-771a-4f54-afba-59f160f1e415"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.472505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6" (OuterVolumeSpecName: "kube-api-access-tgwn6") pod "e56046c3-771a-4f54-afba-59f160f1e415" (UID: "e56046c3-771a-4f54-afba-59f160f1e415"). InnerVolumeSpecName "kube-api-access-tgwn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.560645 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56046c3-771a-4f54-afba-59f160f1e415" (UID: "e56046c3-771a-4f54-afba-59f160f1e415"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568706 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities\") pod \"4d0101a0-a045-41c3-8387-7a84e8236d65\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568870 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftg8\" (UniqueName: \"kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8\") pod \"4d0101a0-a045-41c3-8387-7a84e8236d65\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568899 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content\") pod \"43888409-bce4-40b7-bd9c-4c505b3929b0\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568950 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2gt\" (UniqueName: \"kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt\") pod \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568979 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca\") pod \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.568999 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities\") pod \"43888409-bce4-40b7-bd9c-4c505b3929b0\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.569017 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics\") pod \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\" (UID: \"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 
13:21:40.569033 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s54tw\" (UniqueName: \"kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw\") pod \"43888409-bce4-40b7-bd9c-4c505b3929b0\" (UID: \"43888409-bce4-40b7-bd9c-4c505b3929b0\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.569092 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content\") pod \"4d0101a0-a045-41c3-8387-7a84e8236d65\" (UID: \"4d0101a0-a045-41c3-8387-7a84e8236d65\") " Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.569358 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.569372 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56046c3-771a-4f54-afba-59f160f1e415-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.569382 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgwn6\" (UniqueName: \"kubernetes.io/projected/e56046c3-771a-4f54-afba-59f160f1e415-kube-api-access-tgwn6\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.571950 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities" (OuterVolumeSpecName: "utilities") pod "4d0101a0-a045-41c3-8387-7a84e8236d65" (UID: "4d0101a0-a045-41c3-8387-7a84e8236d65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.572544 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" (UID: "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.573276 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities" (OuterVolumeSpecName: "utilities") pod "43888409-bce4-40b7-bd9c-4c505b3929b0" (UID: "43888409-bce4-40b7-bd9c-4c505b3929b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.574968 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8" (OuterVolumeSpecName: "kube-api-access-lftg8") pod "4d0101a0-a045-41c3-8387-7a84e8236d65" (UID: "4d0101a0-a045-41c3-8387-7a84e8236d65"). InnerVolumeSpecName "kube-api-access-lftg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.575205 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt" (OuterVolumeSpecName: "kube-api-access-nn2gt") pod "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" (UID: "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d"). InnerVolumeSpecName "kube-api-access-nn2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.575631 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" (UID: "30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.575849 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw" (OuterVolumeSpecName: "kube-api-access-s54tw") pod "43888409-bce4-40b7-bd9c-4c505b3929b0" (UID: "43888409-bce4-40b7-bd9c-4c505b3929b0"). InnerVolumeSpecName "kube-api-access-s54tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.587814 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d0101a0-a045-41c3-8387-7a84e8236d65" (UID: "4d0101a0-a045-41c3-8387-7a84e8236d65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.618175 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43888409-bce4-40b7-bd9c-4c505b3929b0" (UID: "43888409-bce4-40b7-bd9c-4c505b3929b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671042 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671098 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0101a0-a045-41c3-8387-7a84e8236d65-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671110 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftg8\" (UniqueName: \"kubernetes.io/projected/4d0101a0-a045-41c3-8387-7a84e8236d65-kube-api-access-lftg8\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671122 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671131 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2gt\" (UniqueName: \"kubernetes.io/projected/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-kube-api-access-nn2gt\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671140 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671148 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43888409-bce4-40b7-bd9c-4c505b3929b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671160 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.671169 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s54tw\" (UniqueName: \"kubernetes.io/projected/43888409-bce4-40b7-bd9c-4c505b3929b0-kube-api-access-s54tw\") on node \"crc\" DevicePath \"\"" Nov 26 13:21:40 crc kubenswrapper[4747]: I1126 13:21:40.683835 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mqnn7"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.178311 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" event={"ID":"20bd504f-0b9c-407b-968c-a2ef32da0158","Type":"ContainerStarted","Data":"6d276851caf96418834b4fe495b66c2b228b52d7c0745a18399e15cef10d771b"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.178803 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.178817 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" 
event={"ID":"20bd504f-0b9c-407b-968c-a2ef32da0158","Type":"ContainerStarted","Data":"b1ee58c24d2d64746fd13e43aed04ec6c9eda061772d8cfcd175c9525789cea1"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.180265 4747 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mqnn7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.180321 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" podUID="20bd504f-0b9c-407b-968c-a2ef32da0158" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.181092 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqbp5" event={"ID":"4d0101a0-a045-41c3-8387-7a84e8236d65","Type":"ContainerDied","Data":"badc636c97c9afc0e0f21ca7a3eb68d1f305c586eb1b89770678381c2175d13a"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.181120 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqbp5" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.181150 4747 scope.go:117] "RemoveContainer" containerID="0c0ff5a2b0a6151346be6928a092ce684f9b131155dd0b470ba747b8a1341814" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.186826 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hf2r6" event={"ID":"e56046c3-771a-4f54-afba-59f160f1e415","Type":"ContainerDied","Data":"e10d8680edd28da58948217b66d45997956c5a4b29753b8ed98f7da8ba904766"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.186903 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hf2r6" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.188591 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" event={"ID":"30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d","Type":"ContainerDied","Data":"0e42a246d604012be885003ca14b4435fb6f61157dd55ce238873d55910ff871"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.188731 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqvkf" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.201353 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7qf" event={"ID":"af471b2c-feb1-40af-bb70-4b41459277c3","Type":"ContainerDied","Data":"50fafa20d936823292db1af99b1368e00f0beaf52419dc270d0865aced316598"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.201489 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4f7qf" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.210294 4747 scope.go:117] "RemoveContainer" containerID="331688a7a890a58ce52abde05df70dd3b0e1f200c8ca94abe3ea46518f67be3d" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.214149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s4vv" event={"ID":"43888409-bce4-40b7-bd9c-4c505b3929b0","Type":"ContainerDied","Data":"3fdb4fda7a0a0c8ce0212849b656416943ac3cb45a19534b7a0988553dc83a49"} Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.214324 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s4vv" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.216533 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7" podStartSLOduration=2.216514715 podStartE2EDuration="2.216514715s" podCreationTimestamp="2025-11-26 13:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:21:41.214571718 +0000 UTC m=+388.200882733" watchObservedRunningTime="2025-11-26 13:21:41.216514715 +0000 UTC m=+388.202825730" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.246504 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.255926 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hf2r6"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.262524 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.265408 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqbp5"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.268330 4747 scope.go:117] "RemoveContainer" containerID="fe8a3b65da9659bb86de22759f21cfde0d93bfad3e6cc372dafe5936a87b57aa" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.282588 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.288866 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqvkf"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.295232 4747 scope.go:117] "RemoveContainer" containerID="403921d7cbdc4b0e8973ce49b40a023d0756e83c7cc656121199980438b46dc9" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.298974 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.307158 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6s4vv"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.311501 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4f7qf"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.315866 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4f7qf"] Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.317980 4747 scope.go:117] 
"RemoveContainer" containerID="499a6e3b9c91e3f55c4658ac75cb61288245a1e1fcfe9bc99c08d4e4af173d7d" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.334067 4747 scope.go:117] "RemoveContainer" containerID="fc05f0e67799a6300012269ff17c6af42dbdf62a28d6f7d2ba30f46421e253ff" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.351516 4747 scope.go:117] "RemoveContainer" containerID="6f5be912877d3cc528e6d6d5458005076af34a582e9a9a4d7732b865158dd190" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.366967 4747 scope.go:117] "RemoveContainer" containerID="8f3e255b0ef929e08ea9d3c725e32c4a1a784d36d0f6e3460765fbcb13d182f4" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.379833 4747 scope.go:117] "RemoveContainer" containerID="af836cf0a7e51d8f703d7a056334992e8d18af71cbb19563d9311e5822712e39" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.393613 4747 scope.go:117] "RemoveContainer" containerID="0c78a08b96e38961f36b728671a374d5a9d6829552f2822549086c62666e5a5c" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.408028 4747 scope.go:117] "RemoveContainer" containerID="4861e583f15f0a681eb5b84d073ae394c2648fc575e2b60f085000e7aad749db" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.419671 4747 scope.go:117] "RemoveContainer" containerID="933656c5936d287eb1bb94f09d1293a2809d51957ab61394ec885e354138e455" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.434562 4747 scope.go:117] "RemoveContainer" containerID="fee48446d3e4c11ad79df68dbf940fa1ecfceff3733e9caebb87b1a4a2869309" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.807696 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" path="/var/lib/kubelet/pods/30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d/volumes" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.808966 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" path="/var/lib/kubelet/pods/43888409-bce4-40b7-bd9c-4c505b3929b0/volumes" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.810257 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" path="/var/lib/kubelet/pods/4d0101a0-a045-41c3-8387-7a84e8236d65/volumes" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.813697 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" path="/var/lib/kubelet/pods/af471b2c-feb1-40af-bb70-4b41459277c3/volumes" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.815636 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56046c3-771a-4f54-afba-59f160f1e415" path="/var/lib/kubelet/pods/e56046c3-771a-4f54-afba-59f160f1e415/volumes" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.907376 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tjwfc"] Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.907780 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="extract-content" Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.907827 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="extract-content" Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.907858 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="extract-utilities" 
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.907877 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.907905 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.907922 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.907943 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.907958 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.907983 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908000 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908029 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908045 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908110 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908128 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="extract-utilities"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908155 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908173 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908199 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908214 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908238 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908255 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908272 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908332 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="extract-content"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908355 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908372 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: E1126 13:21:41.908400 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908416 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908599 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="af471b2c-feb1-40af-bb70-4b41459277c3" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908620 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="43888409-bce4-40b7-bd9c-4c505b3929b0" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908642 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0101a0-a045-41c3-8387-7a84e8236d65" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908659 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56046c3-771a-4f54-afba-59f160f1e415" containerName="registry-server"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.908675 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e8ef3d-7cc7-43c3-9d16-8167d2ebc88d" containerName="marketplace-operator"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.909961 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.912697 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.925495 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjwfc"]
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.993099 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-utilities\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.993199 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9cv2\" (UniqueName: \"kubernetes.io/projected/01124784-f0ee-42b1-82d1-18b591ee2e88-kube-api-access-x9cv2\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:41 crc kubenswrapper[4747]: I1126 13:21:41.993337 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-catalog-content\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.096442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-catalog-content\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.096882 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-utilities\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.097626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-utilities\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.097687 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01124784-f0ee-42b1-82d1-18b591ee2e88-catalog-content\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.098139 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9cv2\" (UniqueName: \"kubernetes.io/projected/01124784-f0ee-42b1-82d1-18b591ee2e88-kube-api-access-x9cv2\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.110647 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n6l85"]
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.115239 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.124783 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6l85"]
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.127320 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.140103 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9cv2\" (UniqueName: \"kubernetes.io/projected/01124784-f0ee-42b1-82d1-18b591ee2e88-kube-api-access-x9cv2\") pod \"redhat-marketplace-tjwfc\" (UID: \"01124784-f0ee-42b1-82d1-18b591ee2e88\") " pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.198911 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-utilities\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.199503 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhh2m\" (UniqueName: \"kubernetes.io/projected/e75bd453-1704-4c67-aea4-e184f0a8b320-kube-api-access-xhh2m\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.199632 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-catalog-content\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.230952 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mqnn7"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.236991 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjwfc"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.300849 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhh2m\" (UniqueName: \"kubernetes.io/projected/e75bd453-1704-4c67-aea4-e184f0a8b320-kube-api-access-xhh2m\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.300950 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-catalog-content\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.301055 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-utilities\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.301586 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-utilities\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.301999 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75bd453-1704-4c67-aea4-e184f0a8b320-catalog-content\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.320007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhh2m\" (UniqueName: \"kubernetes.io/projected/e75bd453-1704-4c67-aea4-e184f0a8b320-kube-api-access-xhh2m\") pod \"certified-operators-n6l85\" (UID: \"e75bd453-1704-4c67-aea4-e184f0a8b320\") " pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.441709 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6l85"
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.659667 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjwfc"]
Nov 26 13:21:42 crc kubenswrapper[4747]: W1126 13:21:42.667353 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01124784_f0ee_42b1_82d1_18b591ee2e88.slice/crio-831d182cf64faf945b18f6dd9dc8ff9953243624a6fbfb2b1072016721a243a1 WatchSource:0}: Error finding container 831d182cf64faf945b18f6dd9dc8ff9953243624a6fbfb2b1072016721a243a1: Status 404 returned error can't find the container with id 831d182cf64faf945b18f6dd9dc8ff9953243624a6fbfb2b1072016721a243a1
Nov 26 13:21:42 crc kubenswrapper[4747]: I1126 13:21:42.820171 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6l85"]
Nov 26 13:21:42 crc kubenswrapper[4747]: W1126 13:21:42.820322 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75bd453_1704_4c67_aea4_e184f0a8b320.slice/crio-c103bd0efe773a4d96c6b63b91e6680e1291631a4bc726e1186b4eff274fef99 WatchSource:0}: Error finding container c103bd0efe773a4d96c6b63b91e6680e1291631a4bc726e1186b4eff274fef99: Status 404 returned error can't find the container with id c103bd0efe773a4d96c6b63b91e6680e1291631a4bc726e1186b4eff274fef99
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.235323 4747 generic.go:334] "Generic (PLEG): container finished" podID="01124784-f0ee-42b1-82d1-18b591ee2e88" containerID="294dd170597701309ff55e4275c3437730931ab2ece0be8acc675b8f0ba065e7" exitCode=0
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.235374 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjwfc" event={"ID":"01124784-f0ee-42b1-82d1-18b591ee2e88","Type":"ContainerDied","Data":"294dd170597701309ff55e4275c3437730931ab2ece0be8acc675b8f0ba065e7"}
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.235637 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjwfc" event={"ID":"01124784-f0ee-42b1-82d1-18b591ee2e88","Type":"ContainerStarted","Data":"831d182cf64faf945b18f6dd9dc8ff9953243624a6fbfb2b1072016721a243a1"}
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.239024 4747 generic.go:334] "Generic (PLEG): container finished" podID="e75bd453-1704-4c67-aea4-e184f0a8b320" containerID="852250b6174f48d82b2f4a9bbbc17d2941f33266880cc8d537319fce869dda60" exitCode=0
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.239124 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6l85" event={"ID":"e75bd453-1704-4c67-aea4-e184f0a8b320","Type":"ContainerDied","Data":"852250b6174f48d82b2f4a9bbbc17d2941f33266880cc8d537319fce869dda60"}
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.239157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6l85" event={"ID":"e75bd453-1704-4c67-aea4-e184f0a8b320","Type":"ContainerStarted","Data":"c103bd0efe773a4d96c6b63b91e6680e1291631a4bc726e1186b4eff274fef99"}
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.303388 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qsbxg"
Nov 26 13:21:43 crc kubenswrapper[4747]: I1126 13:21:43.351049 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"]
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.245409 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6l85" event={"ID":"e75bd453-1704-4c67-aea4-e184f0a8b320","Type":"ContainerStarted","Data":"21538583ede23047537575cef92e46f0be5fe98d746d874c812ad2d7dff699e5"}
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.246802 4747 generic.go:334] "Generic (PLEG): container finished" podID="01124784-f0ee-42b1-82d1-18b591ee2e88" containerID="565f367bc466e7eca72c92cf9900f66dc9e3e86b0faaf06a242e39c56a51c716" exitCode=0
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.246835 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjwfc" event={"ID":"01124784-f0ee-42b1-82d1-18b591ee2e88","Type":"ContainerDied","Data":"565f367bc466e7eca72c92cf9900f66dc9e3e86b0faaf06a242e39c56a51c716"}
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.307891 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ppv6b"]
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.312240 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.313651 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppv6b"]
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.317677 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.431825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-catalog-content\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.431879 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2v5\" (UniqueName: \"kubernetes.io/projected/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-kube-api-access-dd2v5\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.431982 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-utilities\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.523780 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-96vsv"]
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.526730 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-96vsv"]
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.527155 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.557875 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-utilities\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.557962 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-catalog-content\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.558000 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2v5\" (UniqueName: \"kubernetes.io/projected/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-kube-api-access-dd2v5\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.558778 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-utilities\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.558988 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-catalog-content\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.559277 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.588199 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2v5\" (UniqueName: \"kubernetes.io/projected/542f8b1f-6f29-4d0c-83b4-9aadfba039ff-kube-api-access-dd2v5\") pod \"redhat-operators-ppv6b\" (UID: \"542f8b1f-6f29-4d0c-83b4-9aadfba039ff\") " pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.660751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-utilities\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.660815 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-catalog-content\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.660916 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wppv9\" (UniqueName: \"kubernetes.io/projected/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-kube-api-access-wppv9\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.666435 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppv6b"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.762612 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-utilities\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.762855 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-catalog-content\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.762891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wppv9\" (UniqueName: \"kubernetes.io/projected/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-kube-api-access-wppv9\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.763759 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-catalog-content\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.763870 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-utilities\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.786043 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wppv9\" (UniqueName: \"kubernetes.io/projected/d138c6d5-a89a-4ae6-9c33-dd7ac74ee466-kube-api-access-wppv9\") pod \"community-operators-96vsv\" (UID: \"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466\") " pod="openshift-marketplace/community-operators-96vsv"
Nov 26 13:21:44 crc kubenswrapper[4747]: I1126 13:21:44.915253 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-96vsv" Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.089855 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppv6b"] Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.253379 4747 generic.go:334] "Generic (PLEG): container finished" podID="e75bd453-1704-4c67-aea4-e184f0a8b320" containerID="21538583ede23047537575cef92e46f0be5fe98d746d874c812ad2d7dff699e5" exitCode=0 Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.253416 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6l85" event={"ID":"e75bd453-1704-4c67-aea4-e184f0a8b320","Type":"ContainerDied","Data":"21538583ede23047537575cef92e46f0be5fe98d746d874c812ad2d7dff699e5"} Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.259066 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerStarted","Data":"fcfedbb9a7ddf8cf8c20623fe428eea03e30c579903e3021252e65685f100843"} Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.259101 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerStarted","Data":"59c27ac5bf852496f49b7a1bb175f3f98bec9acfe15e0d0f30e492eab0ddb1ea"} Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.263737 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjwfc" event={"ID":"01124784-f0ee-42b1-82d1-18b591ee2e88","Type":"ContainerStarted","Data":"249fe13b3f492f5cd127c1de8e7949623347dd7b3088670b12dd2a10e2204d6e"} Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.300776 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-96vsv"] Nov 26 13:21:45 crc kubenswrapper[4747]: I1126 13:21:45.312163 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tjwfc" podStartSLOduration=2.846658524 podStartE2EDuration="4.312152031s" podCreationTimestamp="2025-11-26 13:21:41 +0000 UTC" firstStartedPulling="2025-11-26 13:21:43.238123185 +0000 UTC m=+390.224434200" lastFinishedPulling="2025-11-26 13:21:44.703616682 +0000 UTC m=+391.689927707" observedRunningTime="2025-11-26 13:21:45.310254534 +0000 UTC m=+392.296565569" watchObservedRunningTime="2025-11-26 13:21:45.312152031 +0000 UTC m=+392.298463046" Nov 26 13:21:45 crc kubenswrapper[4747]: W1126 13:21:45.330955 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd138c6d5_a89a_4ae6_9c33_dd7ac74ee466.slice/crio-6976bc8cf4c35156c06e7d9bc97d75c92fd83e1750b3f5b73aba275b1dd076bd WatchSource:0}: Error finding container 6976bc8cf4c35156c06e7d9bc97d75c92fd83e1750b3f5b73aba275b1dd076bd: Status 404 returned error can't find the container with id 6976bc8cf4c35156c06e7d9bc97d75c92fd83e1750b3f5b73aba275b1dd076bd Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.271381 4747 generic.go:334] "Generic (PLEG): container finished" podID="542f8b1f-6f29-4d0c-83b4-9aadfba039ff" containerID="fcfedbb9a7ddf8cf8c20623fe428eea03e30c579903e3021252e65685f100843" exitCode=0 Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.271450 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerDied","Data":"fcfedbb9a7ddf8cf8c20623fe428eea03e30c579903e3021252e65685f100843"} Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.276588 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6l85" event={"ID":"e75bd453-1704-4c67-aea4-e184f0a8b320","Type":"ContainerStarted","Data":"d7f496d73192b50a822dcf673d87606391df8786ca025183a632d3ea2d7ef120"} Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.279214 4747 generic.go:334] "Generic (PLEG): container finished" podID="d138c6d5-a89a-4ae6-9c33-dd7ac74ee466" containerID="78eb979e5912190903890adf8415bdfbc5b3b0f40245c20ebe83b149233d0658" exitCode=0 Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.280518 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96vsv" event={"ID":"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466","Type":"ContainerDied","Data":"78eb979e5912190903890adf8415bdfbc5b3b0f40245c20ebe83b149233d0658"} Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.280559 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96vsv" event={"ID":"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466","Type":"ContainerStarted","Data":"6976bc8cf4c35156c06e7d9bc97d75c92fd83e1750b3f5b73aba275b1dd076bd"} Nov 26 13:21:46 crc kubenswrapper[4747]: I1126 13:21:46.306108 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n6l85" podStartSLOduration=1.7330796510000002 podStartE2EDuration="4.306092411s" podCreationTimestamp="2025-11-26 13:21:42 +0000 UTC" firstStartedPulling="2025-11-26 13:21:43.24120542 +0000 UTC m=+390.227516435" lastFinishedPulling="2025-11-26 13:21:45.81421818 +0000 UTC m=+392.800529195" observedRunningTime="2025-11-26 13:21:46.304658586 +0000 UTC m=+393.290969601" watchObservedRunningTime="2025-11-26 13:21:46.306092411 +0000 UTC m=+393.292403446" Nov 26 13:21:47 crc kubenswrapper[4747]: I1126 13:21:47.287099 4747 generic.go:334] "Generic (PLEG): container finished" podID="d138c6d5-a89a-4ae6-9c33-dd7ac74ee466" containerID="6114c9a9d1b019c037fc009e4d99545ecf5094dc81ab0f04ddaadd43943efef1" exitCode=0 Nov 26 13:21:47 crc kubenswrapper[4747]: I1126 13:21:47.287148 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96vsv" event={"ID":"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466","Type":"ContainerDied","Data":"6114c9a9d1b019c037fc009e4d99545ecf5094dc81ab0f04ddaadd43943efef1"} Nov 26 13:21:47 crc kubenswrapper[4747]: I1126 13:21:47.290591 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerStarted","Data":"24f520647f1b702c6dae799aaafb5ecd2e481b2aac83a5b444a6dda818223740"} Nov 26 13:21:48 crc kubenswrapper[4747]: I1126 13:21:48.299688 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96vsv" event={"ID":"d138c6d5-a89a-4ae6-9c33-dd7ac74ee466","Type":"ContainerStarted","Data":"5f95ab8000ccc668b7e5b4e2ce67e5b9b3d3c1aaaf43f3bfe93590fee3b497a5"} Nov 26 13:21:48 crc kubenswrapper[4747]: I1126 13:21:48.301566 4747 generic.go:334] "Generic (PLEG): container finished" podID="542f8b1f-6f29-4d0c-83b4-9aadfba039ff" containerID="24f520647f1b702c6dae799aaafb5ecd2e481b2aac83a5b444a6dda818223740" exitCode=0 Nov 26 13:21:48 
crc kubenswrapper[4747]: I1126 13:21:48.301609 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerDied","Data":"24f520647f1b702c6dae799aaafb5ecd2e481b2aac83a5b444a6dda818223740"} Nov 26 13:21:48 crc kubenswrapper[4747]: I1126 13:21:48.333698 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-96vsv" podStartSLOduration=2.852120899 podStartE2EDuration="4.333684146s" podCreationTimestamp="2025-11-26 13:21:44 +0000 UTC" firstStartedPulling="2025-11-26 13:21:46.281206636 +0000 UTC m=+393.267517681" lastFinishedPulling="2025-11-26 13:21:47.762769913 +0000 UTC m=+394.749080928" observedRunningTime="2025-11-26 13:21:48.323445247 +0000 UTC m=+395.309756292" watchObservedRunningTime="2025-11-26 13:21:48.333684146 +0000 UTC m=+395.319995161" Nov 26 13:21:49 crc kubenswrapper[4747]: I1126 13:21:49.314468 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppv6b" event={"ID":"542f8b1f-6f29-4d0c-83b4-9aadfba039ff","Type":"ContainerStarted","Data":"8036c8d705675b8839241c92bf3cf9b1519d7e5444a9b2d7e36fb61c3dc70141"} Nov 26 13:21:49 crc kubenswrapper[4747]: I1126 13:21:49.341257 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ppv6b" podStartSLOduration=2.896153259 podStartE2EDuration="5.341241637s" podCreationTimestamp="2025-11-26 13:21:44 +0000 UTC" firstStartedPulling="2025-11-26 13:21:46.273288863 +0000 UTC m=+393.259599918" lastFinishedPulling="2025-11-26 13:21:48.718377251 +0000 UTC m=+395.704688296" observedRunningTime="2025-11-26 13:21:49.339850944 +0000 UTC m=+396.326161969" watchObservedRunningTime="2025-11-26 13:21:49.341241637 +0000 UTC m=+396.327552652" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.238697 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tjwfc" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.239491 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tjwfc" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.299383 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tjwfc" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.385134 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tjwfc" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.442391 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n6l85" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.442458 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n6l85" Nov 26 13:21:52 crc kubenswrapper[4747]: I1126 13:21:52.504292 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n6l85" Nov 26 13:21:53 crc kubenswrapper[4747]: I1126 13:21:53.389903 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n6l85" Nov 26 13:21:54 crc kubenswrapper[4747]: I1126 13:21:54.667516 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-ppv6b" Nov 26 13:21:54 crc kubenswrapper[4747]: I1126 13:21:54.667581 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ppv6b" Nov 26 13:21:54 crc kubenswrapper[4747]: I1126 13:21:54.915513 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-96vsv" Nov 26 13:21:54 crc kubenswrapper[4747]: I1126 13:21:54.915868 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-96vsv" Nov 26 13:21:54 crc kubenswrapper[4747]: I1126 13:21:54.988049 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-96vsv" Nov 26 13:21:55 crc kubenswrapper[4747]: I1126 13:21:55.425979 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-96vsv" Nov 26 13:21:55 crc kubenswrapper[4747]: I1126 13:21:55.738325 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ppv6b" podUID="542f8b1f-6f29-4d0c-83b4-9aadfba039ff" containerName="registry-server" probeResult="failure" output=< Nov 26 13:21:55 crc kubenswrapper[4747]: timeout: failed to connect service ":50051" within 1s Nov 26 13:21:55 crc kubenswrapper[4747]: > Nov 26 13:22:03 crc kubenswrapper[4747]: I1126 13:22:03.417496 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:22:03 crc kubenswrapper[4747]: I1126 13:22:03.418165 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:22:04 crc kubenswrapper[4747]: I1126 13:22:04.734956 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ppv6b" Nov 26 13:22:04 crc kubenswrapper[4747]: I1126 13:22:04.814397 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ppv6b" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.401263 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" podUID="ea6220c2-d975-49c7-86c4-d71c809cc426" containerName="registry" containerID="cri-o://ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f" gracePeriod=30 Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.795683 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868037 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868399 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868435 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868471 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868497 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868582 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868760 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhj4s\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.868821 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted\") pod \"ea6220c2-d975-49c7-86c4-d71c809cc426\" (UID: \"ea6220c2-d975-49c7-86c4-d71c809cc426\") " Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.869212 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.869427 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.881300 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.885357 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s" (OuterVolumeSpecName: "kube-api-access-vhj4s") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "kube-api-access-vhj4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.885752 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.887305 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.889210 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.901890 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ea6220c2-d975-49c7-86c4-d71c809cc426" (UID: "ea6220c2-d975-49c7-86c4-d71c809cc426"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971107 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971155 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhj4s\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-kube-api-access-vhj4s\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971178 4747 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea6220c2-d975-49c7-86c4-d71c809cc426-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971193 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971206 4747 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea6220c2-d975-49c7-86c4-d71c809cc426-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971217 4747 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:08 crc kubenswrapper[4747]: I1126 13:22:08.971227 4747 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea6220c2-d975-49c7-86c4-d71c809cc426-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.445425 4747 generic.go:334] "Generic (PLEG): container finished" podID="ea6220c2-d975-49c7-86c4-d71c809cc426" containerID="ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f" exitCode=0 Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.445507 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" event={"ID":"ea6220c2-d975-49c7-86c4-d71c809cc426","Type":"ContainerDied","Data":"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f"} Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.446294 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" event={"ID":"ea6220c2-d975-49c7-86c4-d71c809cc426","Type":"ContainerDied","Data":"444837625b3ad46c758bfbcd203c993b1c66811b637c608b9ba915e4fc618790"} Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.446339 4747 scope.go:117] "RemoveContainer" containerID="ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f" Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.445612 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sddmq" Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.481177 4747 scope.go:117] "RemoveContainer" containerID="ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f" Nov 26 13:22:09 crc kubenswrapper[4747]: E1126 13:22:09.482702 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f\": container with ID starting with ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f not found: ID does not exist" containerID="ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f" Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.482760 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f"} err="failed to get container status \"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f\": rpc error: code = NotFound desc = could not find container \"ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f\": container with ID starting with ad485fbe3d3ff1cbf343e74d82986b02db2109544325b7b2e2527873a5bc356f not found: ID does not exist" Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.514701 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"] Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.525349 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sddmq"] Nov 26 13:22:09 crc kubenswrapper[4747]: I1126 13:22:09.810390 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6220c2-d975-49c7-86c4-d71c809cc426" path="/var/lib/kubelet/pods/ea6220c2-d975-49c7-86c4-d71c809cc426/volumes" Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.418135 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.418711 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.418767 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.419402 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.419467 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" 
podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1" gracePeriod=600 Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.610443 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1" exitCode=0 Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.610537 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1"} Nov 26 13:22:33 crc kubenswrapper[4747]: I1126 13:22:33.610883 4747 scope.go:117] "RemoveContainer" containerID="6a231bb8e8914109325dc51e937c0c5343e63063fd5cd60fcc6d825181dbb023" Nov 26 13:22:34 crc kubenswrapper[4747]: I1126 13:22:34.619639 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301"} Nov 26 13:24:33 crc kubenswrapper[4747]: I1126 13:24:33.418026 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:24:33 crc kubenswrapper[4747]: I1126 13:24:33.418888 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:25:03 crc kubenswrapper[4747]: I1126 13:25:03.417273 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:25:03 crc kubenswrapper[4747]: I1126 13:25:03.417799 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:25:33 crc kubenswrapper[4747]: I1126 13:25:33.418186 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:25:33 crc kubenswrapper[4747]: I1126 13:25:33.418919 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:25:33 
crc kubenswrapper[4747]: I1126 13:25:33.418987 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:25:33 crc kubenswrapper[4747]: I1126 13:25:33.419927 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:25:33 crc kubenswrapper[4747]: I1126 13:25:33.420034 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301" gracePeriod=600 Nov 26 13:25:34 crc kubenswrapper[4747]: I1126 13:25:34.279257 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301" exitCode=0 Nov 26 13:25:34 crc kubenswrapper[4747]: I1126 13:25:34.279366 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301"} Nov 26 13:25:34 crc kubenswrapper[4747]: I1126 13:25:34.279903 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77"} Nov 26 13:25:34 crc kubenswrapper[4747]: I1126 13:25:34.279944 4747 scope.go:117] "RemoveContainer" containerID="11b7db12d54852688d92cc54c020e09696dea5ae8e3f8c5325e53a455b249bb1" Nov 26 13:27:33 crc kubenswrapper[4747]: I1126 13:27:33.418202 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:27:33 crc kubenswrapper[4747]: I1126 13:27:33.419100 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:27:48 crc kubenswrapper[4747]: I1126 13:27:48.430438 4747 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:28:03 crc kubenswrapper[4747]: I1126 13:28:03.417936 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:28:03 crc kubenswrapper[4747]: I1126 13:28:03.418750 4747 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:28:33 crc kubenswrapper[4747]: I1126 13:28:33.418189 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:28:33 crc kubenswrapper[4747]: I1126 13:28:33.418857 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:28:33 crc kubenswrapper[4747]: I1126 13:28:33.418922 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:28:33 crc kubenswrapper[4747]: I1126 13:28:33.419800 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:28:33 crc kubenswrapper[4747]: I1126 13:28:33.419899 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77" gracePeriod=600 Nov 26 13:28:34 crc kubenswrapper[4747]: I1126 13:28:34.499865 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77" exitCode=0 Nov 26 13:28:34 crc kubenswrapper[4747]: I1126 13:28:34.499932 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77"} Nov 26 13:28:34 crc kubenswrapper[4747]: I1126 13:28:34.500311 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de"} Nov 26 13:28:34 crc kubenswrapper[4747]: I1126 13:28:34.500327 4747 scope.go:117] "RemoveContainer" containerID="4be8b177f6bc1271ff63b8de2d2fb713f786e5dfb1bab928f43d6a0e43bde301" Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.849663 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4wml"] Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.850963 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" 
podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-controller" containerID="cri-o://9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851235 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="nbdb" containerID="cri-o://d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851296 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="northd" containerID="cri-o://6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851274 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851377 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-acl-logging" containerID="cri-o://90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851448 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-node" containerID="cri-o://bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.851350 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="sbdb" containerID="cri-o://222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4747]: I1126 13:29:30.903437 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" containerID="cri-o://581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc" gracePeriod=30 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.139804 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/3.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.142580 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovn-acl-logging/0.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.143289 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovn-controller/0.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.143845 4747 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191618 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rw6c6"] Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191809 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="nbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191821 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="nbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191830 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-node" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191837 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-node" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191845 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6220c2-d975-49c7-86c4-d71c809cc426" containerName="registry" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191851 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6220c2-d975-49c7-86c4-d71c809cc426" containerName="registry" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191858 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191864 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191871 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kubecfg-setup" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191877 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kubecfg-setup" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191887 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191893 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191900 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191905 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191912 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="northd" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191917 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="northd" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191925 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191930 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191943 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-acl-logging" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191949 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-acl-logging" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191957 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="sbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191962 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="sbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.191968 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.191974 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192106 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6220c2-d975-49c7-86c4-d71c809cc426" containerName="registry" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192119 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192128 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192138 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192148 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="sbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192157 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192163 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192169 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovn-acl-logging" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192175 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="kube-rbac-proxy-node" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192181 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="nbdb" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192191 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="northd" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.192272 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192279 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192364 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192373 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: E1126 13:29:31.192447 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.192453 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerName="ovnkube-controller" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.193767 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.255847 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.255941 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.255970 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.255996 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256030 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256086 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin\") 
pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256107 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256108 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256122 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256098 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256255 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cm5m\" (UniqueName: \"kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256183 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256188 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256224 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket" (OuterVolumeSpecName: "log-socket") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256294 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256445 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256446 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256486 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256494 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256542 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256571 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256592 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash" (OuterVolumeSpecName: "host-slash") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256602 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256623 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256624 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256675 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256711 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256740 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256747 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256756 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256767 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256790 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log" (OuterVolumeSpecName: "node-log") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256794 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch\") pod \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\" (UID: \"59482207-ba7e-4b71-a40b-968d8e3dcb8b\") " Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256823 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256859 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256886 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.256980 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovn-node-metrics-cert\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257024 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257118 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-node-log\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257166 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-kubelet\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257194 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-systemd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257249 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-var-lib-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257280 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-config\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257314 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257475 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-netns\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257523 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9sr9\" (UniqueName: \"kubernetes.io/projected/3be4b8e5-397a-48bb-8d14-bc3609797b2b-kube-api-access-p9sr9\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257576 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-slash\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257601 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-env-overrides\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257652 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-systemd-units\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257690 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-ovn\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc 
kubenswrapper[4747]: I1126 13:29:31.257770 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-netd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257808 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-log-socket\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257927 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-bin\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.257988 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-script-lib\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258046 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-etc-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258175 4747 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-slash\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258194 4747 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258210 4747 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258225 4747 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258240 4747 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-node-log\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258257 4747 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-netd\") on node 
\"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258272 4747 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258286 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258300 4747 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-log-socket\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258315 4747 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258330 4747 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258343 4747 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258355 4747 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258367 4747 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258377 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258388 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.258400 4747 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.262048 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.262492 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m" (OuterVolumeSpecName: "kube-api-access-2cm5m") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "kube-api-access-2cm5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.272327 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "59482207-ba7e-4b71-a40b-968d8e3dcb8b" (UID: "59482207-ba7e-4b71-a40b-968d8e3dcb8b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359670 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-systemd-units\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359735 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-ovn\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359791 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-systemd-units\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359850 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-netd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359882 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-log-socket\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359918 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-bin\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359928 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-ovn\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-script-lib\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359977 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-bin\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359958 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-log-socket\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.359986 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-etc-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360013 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-etc-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360033 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovn-node-metrics-cert\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360123 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-cni-netd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360161 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360228 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" 
Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360266 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-node-log\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360266 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360303 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-kubelet\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360326 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-node-log\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360334 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-systemd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-systemd\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360305 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360365 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-var-lib-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360406 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-var-lib-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360371 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-kubelet\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-config\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360479 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360519 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-netns\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360552 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9sr9\" (UniqueName: \"kubernetes.io/projected/3be4b8e5-397a-48bb-8d14-bc3609797b2b-kube-api-access-p9sr9\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360575 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-run-netns\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-slash\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-run-openvswitch\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360620 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-env-overrides\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360784 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cm5m\" (UniqueName: 
\"kubernetes.io/projected/59482207-ba7e-4b71-a40b-968d8e3dcb8b-kube-api-access-2cm5m\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360814 4747 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59482207-ba7e-4b71-a40b-968d8e3dcb8b-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360837 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59482207-ba7e-4b71-a40b-968d8e3dcb8b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.360849 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3be4b8e5-397a-48bb-8d14-bc3609797b2b-host-slash\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.361288 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-script-lib\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.361316 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-env-overrides\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.361381 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovnkube-config\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.366485 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3be4b8e5-397a-48bb-8d14-bc3609797b2b-ovn-node-metrics-cert\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.380983 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9sr9\" (UniqueName: \"kubernetes.io/projected/3be4b8e5-397a-48bb-8d14-bc3609797b2b-kube-api-access-p9sr9\") pod \"ovnkube-node-rw6c6\" (UID: \"3be4b8e5-397a-48bb-8d14-bc3609797b2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.514400 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.877968 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovnkube-controller/3.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.882302 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovn-acl-logging/0.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.883389 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4wml_59482207-ba7e-4b71-a40b-968d8e3dcb8b/ovn-controller/0.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884004 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884049 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884084 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884099 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884112 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884125 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884137 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54" exitCode=143 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884150 4747 generic.go:334] "Generic (PLEG): container finished" podID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" containerID="9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482" exitCode=143 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884202 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884365 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884411 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884433 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884454 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884472 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884491 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884509 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884525 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884537 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884548 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884558 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884569 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884579 4747 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884589 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884600 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884614 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884676 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884693 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884705 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884731 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884742 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884753 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884763 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884773 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884784 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884794 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884810 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884827 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884839 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884851 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884861 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884872 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884882 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884892 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884903 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884916 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884929 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884949 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4wml" event={"ID":"59482207-ba7e-4b71-a40b-968d8e3dcb8b","Type":"ContainerDied","Data":"f5edc436c802175ca9a07986cfa7354a14a57e55f5eca04d428de004a0b70ba7"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884971 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.884987 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} 
Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885002 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885018 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885028 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885040 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885050 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885095 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885106 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885116 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.885140 4747 scope.go:117] "RemoveContainer" containerID="581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.891285 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/2.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.891960 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/1.log" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.892032 4747 generic.go:334] "Generic (PLEG): container finished" podID="aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1" containerID="c225843c709935a5fa59c02609d44f595f192a71576db8fbbce3fb388e1f2d39" exitCode=2 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.892160 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerDied","Data":"c225843c709935a5fa59c02609d44f595f192a71576db8fbbce3fb388e1f2d39"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.892202 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.892762 4747 scope.go:117] "RemoveContainer" 
containerID="c225843c709935a5fa59c02609d44f595f192a71576db8fbbce3fb388e1f2d39" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.894100 4747 generic.go:334] "Generic (PLEG): container finished" podID="3be4b8e5-397a-48bb-8d14-bc3609797b2b" containerID="2563b782a5fb1fb211be1e50c1a4a3744498c3388197868800471825c5682dca" exitCode=0 Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.894133 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerDied","Data":"2563b782a5fb1fb211be1e50c1a4a3744498c3388197868800471825c5682dca"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.894157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"b1ed3ba7c122118a205770023650642523489e3c3aa26bc42df6794373e74a23"} Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.917249 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4wml"] Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.917421 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.920930 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4wml"] Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.943116 4747 scope.go:117] "RemoveContainer" containerID="222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5" Nov 26 13:29:31 crc kubenswrapper[4747]: I1126 13:29:31.985975 4747 scope.go:117] "RemoveContainer" containerID="d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.023845 4747 scope.go:117] "RemoveContainer" containerID="6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.051295 4747 scope.go:117] "RemoveContainer" containerID="e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.066729 4747 scope.go:117] "RemoveContainer" containerID="bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.079947 4747 scope.go:117] "RemoveContainer" containerID="90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.098376 4747 scope.go:117] "RemoveContainer" containerID="9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.123712 4747 scope.go:117] "RemoveContainer" containerID="cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.155026 4747 scope.go:117] "RemoveContainer" containerID="581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.155574 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc\": container with ID starting with 581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc not found: ID does not exist" 
containerID="581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.155609 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc"} err="failed to get container status \"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc\": rpc error: code = NotFound desc = could not find container \"581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc\": container with ID starting with 581aab40f39b3e64db55c5396554dcb4b74f9a18e3424d751229b23aaf16f7fc not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.155633 4747 scope.go:117] "RemoveContainer" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.156003 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\": container with ID starting with 4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75 not found: ID does not exist" containerID="4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156028 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75"} err="failed to get container status \"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\": rpc error: code = NotFound desc = could not find container \"4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75\": container with ID starting with 4c7abe70f34fb143da3eb5998c3a0b3f1cc4703e910bc1c665566a6d58e1ee75 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156048 4747 scope.go:117] "RemoveContainer" containerID="222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.156328 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\": container with ID starting with 222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5 not found: ID does not exist" containerID="222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156355 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5"} err="failed to get container status \"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\": rpc error: code = NotFound desc = could not find container \"222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5\": container with ID starting with 222e9189fc8941a5614b20e659e8f6590d64106cd21e9edf283742d63652d6e5 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156373 4747 scope.go:117] "RemoveContainer" containerID="d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.156650 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\": container with ID starting with d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a not found: ID does not exist" containerID="d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156676 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a"} err="failed to get container status \"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\": rpc error: code = NotFound desc = could not find container \"d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a\": container with ID starting with d06710ab8502394301290a8fc8a1230b71f33b9ab4caf4e75c9248119e24923a not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156691 4747 scope.go:117] "RemoveContainer" containerID="6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.156944 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\": container with ID starting with 6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4 not found: ID does not exist" containerID="6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156966 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4"} err="failed to get container status \"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\": rpc error: code = NotFound desc = could not find container \"6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4\": container with ID starting with 6e630015ffb9ea7b5c50cf9a7cb166531f378734116801e4338bf61374d45aa4 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.156984 4747 scope.go:117] "RemoveContainer" containerID="e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.157318 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\": container with ID starting with e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df not found: ID does not exist" containerID="e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157343 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df"} err="failed to get container status \"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\": rpc error: code = NotFound desc = could not find container \"e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df\": container with ID starting with e1ce72fcf65806d29dbca89301c9b3aa829781038dd856480d273e92674dc0df not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157381 4747 scope.go:117] "RemoveContainer" containerID="bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766" Nov 26 13:29:32 crc 
kubenswrapper[4747]: E1126 13:29:32.157608 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\": container with ID starting with bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766 not found: ID does not exist" containerID="bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157640 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766"} err="failed to get container status \"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\": rpc error: code = NotFound desc = could not find container \"bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766\": container with ID starting with bbf8552f3186d5f36f3cc4e85cbd934de1e45d89061aeec1d9ae17664f0a2766 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157662 4747 scope.go:117] "RemoveContainer" containerID="90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.157847 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\": container with ID starting with 90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54 not found: ID does not exist" containerID="90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157870 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54"} err="failed to get container status \"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\": rpc error: code = NotFound desc = could not find container \"90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54\": container with ID starting with 90de340356774bfd84c1206145e2badd35a9ccaf7643ee2c93fa9ac6a079fb54 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.157886 4747 scope.go:117] "RemoveContainer" containerID="9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.158081 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\": container with ID starting with 9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482 not found: ID does not exist" containerID="9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.158109 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482"} err="failed to get container status \"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\": rpc error: code = NotFound desc = could not find container \"9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482\": container with ID starting with 9cb75dc469360fb6e8c248097d1c5460abf441d6b32d31a9dd92b895d77a1482 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: 
I1126 13:29:32.158126 4747 scope.go:117] "RemoveContainer" containerID="cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12" Nov 26 13:29:32 crc kubenswrapper[4747]: E1126 13:29:32.158291 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\": container with ID starting with cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12 not found: ID does not exist" containerID="cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.158316 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12"} err="failed to get container status \"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\": rpc error: code = NotFound desc = could not find container \"cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12\": container with ID starting with cc6865269aaf5ecd64a137f0145cb05b0e335dfc4d0e17ae9b6e41aa74369b12 not found: ID does not exist" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.913715 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/2.log" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.915328 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/1.log" Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.915406 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lb7jc" event={"ID":"aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1","Type":"ContainerStarted","Data":"e253bf7959a9acb5e2780323b82595886fd8fc69261082070d25edeb824269a5"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920395 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"28d6f379a0fcad91ecde1e9d2f22b65a2489b3776bc04cc2fd5e9aa05e4ebd33"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"4804dde256419eb03c23b757b4e0715b169d6f36a53627e5e8c3ce13bd91a8d7"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920476 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"dfd8bb68811b1f13e50f410152393cdbd4e1931ba9fe2b085b7cb24080d131bd"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920511 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"c575f00bf361b4d80158d262868a86a691ecec3ed13b46f70f1c76d0673da2c7"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920528 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"fc9ccf498092a21803bcbd295679d4e556f58a0370af3178a7b165ab1ab62cca"} Nov 26 13:29:32 crc kubenswrapper[4747]: I1126 13:29:32.920543 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"eaeb0393e5a195244bc732e37177c4b9176e74bdb1ae3e9b0fc2d6e3512ed0f2"} Nov 26 13:29:33 crc kubenswrapper[4747]: I1126 13:29:33.806730 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59482207-ba7e-4b71-a40b-968d8e3dcb8b" path="/var/lib/kubelet/pods/59482207-ba7e-4b71-a40b-968d8e3dcb8b/volumes"
event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"c1edfb0ef324df89a332214d4584d39d6f93845f74bf70a659410fc92af0d1fd"} Nov 26 13:29:37 crc kubenswrapper[4747]: I1126 13:29:37.963306 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" event={"ID":"3be4b8e5-397a-48bb-8d14-bc3609797b2b","Type":"ContainerStarted","Data":"4dd26713934418397bb3160fb63b9eae23c2403c7fd053fea9813afe3fd33162"} Nov 26 13:29:37 crc kubenswrapper[4747]: I1126 13:29:37.965613 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:37 crc kubenswrapper[4747]: I1126 13:29:37.995622 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:38 crc kubenswrapper[4747]: I1126 13:29:38.015393 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" podStartSLOduration=7.015029441 podStartE2EDuration="7.015029441s" podCreationTimestamp="2025-11-26 13:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:29:38.00461213 +0000 UTC m=+864.990923235" watchObservedRunningTime="2025-11-26 13:29:38.015029441 +0000 UTC m=+865.001340496" Nov 26 13:29:38 crc kubenswrapper[4747]: I1126 13:29:38.968378 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:38 crc kubenswrapper[4747]: I1126 13:29:38.968872 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:29:39 crc kubenswrapper[4747]: I1126 13:29:39.030628 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.191754 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t"] Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.193010 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.195550 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.199341 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.200683 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t"] Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.302192 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8zn\" (UniqueName: \"kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.302249 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.302339 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.403463 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8zn\" (UniqueName: \"kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.403531 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.403611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.404643 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume\") pod 
\"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.416215 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.419046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8zn\" (UniqueName: \"kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn\") pod \"collect-profiles-29402730-n9m6t\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.512445 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:00 crc kubenswrapper[4747]: I1126 13:30:00.694298 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t"] Nov 26 13:30:01 crc kubenswrapper[4747]: I1126 13:30:01.117044 4747 generic.go:334] "Generic (PLEG): container finished" podID="a12a8573-36fe-4a18-9a53-08b0c5de22aa" containerID="2b6ac77a8f22e7ec02e273f31929555809b954a6bbbc29429aa1e2874b866851" exitCode=0 Nov 26 13:30:01 crc kubenswrapper[4747]: I1126 13:30:01.117158 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" event={"ID":"a12a8573-36fe-4a18-9a53-08b0c5de22aa","Type":"ContainerDied","Data":"2b6ac77a8f22e7ec02e273f31929555809b954a6bbbc29429aa1e2874b866851"} Nov 26 13:30:01 crc kubenswrapper[4747]: I1126 13:30:01.117371 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" event={"ID":"a12a8573-36fe-4a18-9a53-08b0c5de22aa","Type":"ContainerStarted","Data":"b21e9380900c648a8561783e2fc5405ce83c1183e1df67048ec31402d274b0f5"} Nov 26 13:30:01 crc kubenswrapper[4747]: I1126 13:30:01.537489 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rw6c6" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.475736 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.629004 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8zn\" (UniqueName: \"kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn\") pod \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.629873 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume\") pod \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.629975 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume\") pod \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\" (UID: \"a12a8573-36fe-4a18-9a53-08b0c5de22aa\") " Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.630464 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "a12a8573-36fe-4a18-9a53-08b0c5de22aa" (UID: "a12a8573-36fe-4a18-9a53-08b0c5de22aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.634182 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a12a8573-36fe-4a18-9a53-08b0c5de22aa" (UID: "a12a8573-36fe-4a18-9a53-08b0c5de22aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.634321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn" (OuterVolumeSpecName: "kube-api-access-8s8zn") pod "a12a8573-36fe-4a18-9a53-08b0c5de22aa" (UID: "a12a8573-36fe-4a18-9a53-08b0c5de22aa"). InnerVolumeSpecName "kube-api-access-8s8zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.731535 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8zn\" (UniqueName: \"kubernetes.io/projected/a12a8573-36fe-4a18-9a53-08b0c5de22aa-kube-api-access-8s8zn\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.731565 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a12a8573-36fe-4a18-9a53-08b0c5de22aa-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:02 crc kubenswrapper[4747]: I1126 13:30:02.731575 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12a8573-36fe-4a18-9a53-08b0c5de22aa-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:03 crc kubenswrapper[4747]: I1126 13:30:03.134373 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" event={"ID":"a12a8573-36fe-4a18-9a53-08b0c5de22aa","Type":"ContainerDied","Data":"b21e9380900c648a8561783e2fc5405ce83c1183e1df67048ec31402d274b0f5"} Nov 26 13:30:03 crc kubenswrapper[4747]: I1126 13:30:03.134695 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21e9380900c648a8561783e2fc5405ce83c1183e1df67048ec31402d274b0f5" Nov 26 13:30:03 crc kubenswrapper[4747]: I1126 13:30:03.134465 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-n9m6t" Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.420578 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"] Nov 26 13:30:05 crc kubenswrapper[4747]: E1126 13:30:05.421352 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a8573-36fe-4a18-9a53-08b0c5de22aa" containerName="collect-profiles" Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.421410 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a8573-36fe-4a18-9a53-08b0c5de22aa" containerName="collect-profiles" Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.421614 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a8573-36fe-4a18-9a53-08b0c5de22aa" containerName="collect-profiles" Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.422724 4747 util.go:30] "No sandbox for pod can be found. 
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.422724 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.427739 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.462009 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"]
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.565668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbmh\" (UniqueName: \"kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.565754 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.565854 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.667287 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.667385 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbmh\" (UniqueName: \"kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.667412 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.667945 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.668197 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.698844 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbmh\" (UniqueName: \"kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.759419 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:05 crc kubenswrapper[4747]: I1126 13:30:05.978492 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"]
Nov 26 13:30:06 crc kubenswrapper[4747]: I1126 13:30:06.150092 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerStarted","Data":"30d558825df721f870b32f2e33a54418e87aee5be941fa991cc98f4ac6aa8bf9"}
Nov 26 13:30:06 crc kubenswrapper[4747]: I1126 13:30:06.150149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerStarted","Data":"7ed0acc0b750846967f52a88fb86c122f1ba3329b0131a5ec2b35a65406d3b38"}
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.158212 4747 generic.go:334] "Generic (PLEG): container finished" podID="6e988ef3-c6be-4380-853f-95039903e425" containerID="30d558825df721f870b32f2e33a54418e87aee5be941fa991cc98f4ac6aa8bf9" exitCode=0
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.158292 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerDied","Data":"30d558825df721f870b32f2e33a54418e87aee5be941fa991cc98f4ac6aa8bf9"}
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.160514 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.782388 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"]
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.783634 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.813567 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"]
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.897003 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.897137 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df54m\" (UniqueName: \"kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.897163 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.999145 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.999293 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df54m\" (UniqueName: \"kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.999327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:07 crc kubenswrapper[4747]: I1126 13:30:07.999833 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:08 crc kubenswrapper[4747]: I1126 13:30:08.000035 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:08 crc kubenswrapper[4747]: I1126 13:30:08.025070 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df54m\" (UniqueName: \"kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m\") pod \"redhat-operators-qdwsx\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:08 crc kubenswrapper[4747]: I1126 13:30:08.114550 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:08 crc kubenswrapper[4747]: I1126 13:30:08.338549 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"]
Nov 26 13:30:09 crc kubenswrapper[4747]: I1126 13:30:09.183033 4747 generic.go:334] "Generic (PLEG): container finished" podID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerID="64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a" exitCode=0
Nov 26 13:30:09 crc kubenswrapper[4747]: I1126 13:30:09.183690 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerDied","Data":"64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a"}
Nov 26 13:30:09 crc kubenswrapper[4747]: I1126 13:30:09.183746 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerStarted","Data":"17ff8bfd26e8f471807f6237b6b6289a448f8742febe76eee1c65036b487de59"}
Nov 26 13:30:09 crc kubenswrapper[4747]: I1126 13:30:09.187897 4747 generic.go:334] "Generic (PLEG): container finished" podID="6e988ef3-c6be-4380-853f-95039903e425" containerID="9a4e35493c34d1cc9c120a0fb4519289ed11a091e70b2aeeee935ed92ae418e1" exitCode=0
Nov 26 13:30:09 crc kubenswrapper[4747]: I1126 13:30:09.188123 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerDied","Data":"9a4e35493c34d1cc9c120a0fb4519289ed11a091e70b2aeeee935ed92ae418e1"}
Nov 26 13:30:10 crc kubenswrapper[4747]: I1126 13:30:10.194714 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerStarted","Data":"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651"}
Nov 26 13:30:10 crc kubenswrapper[4747]: I1126 13:30:10.197268 4747 generic.go:334] "Generic (PLEG): container finished" podID="6e988ef3-c6be-4380-853f-95039903e425" containerID="e7778fb2ad58ecfa97a6f3f61c92202c49dbd456785e3736d1a2f167240d678f" exitCode=0
Nov 26 13:30:10 crc kubenswrapper[4747]: I1126 13:30:10.197305 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerDied","Data":"e7778fb2ad58ecfa97a6f3f61c92202c49dbd456785e3736d1a2f167240d678f"}
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.204000 4747 generic.go:334] "Generic (PLEG): container finished" podID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerID="b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651" exitCode=0
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.204200 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerDied","Data":"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651"}
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.467690 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.554099 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbmh\" (UniqueName: \"kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh\") pod \"6e988ef3-c6be-4380-853f-95039903e425\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") "
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.554197 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle\") pod \"6e988ef3-c6be-4380-853f-95039903e425\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") "
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.554242 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util\") pod \"6e988ef3-c6be-4380-853f-95039903e425\" (UID: \"6e988ef3-c6be-4380-853f-95039903e425\") "
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.556448 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle" (OuterVolumeSpecName: "bundle") pod "6e988ef3-c6be-4380-853f-95039903e425" (UID: "6e988ef3-c6be-4380-853f-95039903e425"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.563230 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh" (OuterVolumeSpecName: "kube-api-access-sbbmh") pod "6e988ef3-c6be-4380-853f-95039903e425" (UID: "6e988ef3-c6be-4380-853f-95039903e425"). InnerVolumeSpecName "kube-api-access-sbbmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.655567 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.655601 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbmh\" (UniqueName: \"kubernetes.io/projected/6e988ef3-c6be-4380-853f-95039903e425-kube-api-access-sbbmh\") on node \"crc\" DevicePath \"\""
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.909069 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util" (OuterVolumeSpecName: "util") pod "6e988ef3-c6be-4380-853f-95039903e425" (UID: "6e988ef3-c6be-4380-853f-95039903e425"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:30:11 crc kubenswrapper[4747]: I1126 13:30:11.960459 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e988ef3-c6be-4380-853f-95039903e425-util\") on node \"crc\" DevicePath \"\""
Nov 26 13:30:12 crc kubenswrapper[4747]: I1126 13:30:12.212416 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb" event={"ID":"6e988ef3-c6be-4380-853f-95039903e425","Type":"ContainerDied","Data":"7ed0acc0b750846967f52a88fb86c122f1ba3329b0131a5ec2b35a65406d3b38"}
Nov 26 13:30:12 crc kubenswrapper[4747]: I1126 13:30:12.212752 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ed0acc0b750846967f52a88fb86c122f1ba3329b0131a5ec2b35a65406d3b38"
Nov 26 13:30:12 crc kubenswrapper[4747]: I1126 13:30:12.212440 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb"
Nov 26 13:30:12 crc kubenswrapper[4747]: I1126 13:30:12.214897 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerStarted","Data":"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66"}
Nov 26 13:30:12 crc kubenswrapper[4747]: I1126 13:30:12.240737 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qdwsx" podStartSLOduration=2.4945112050000002 podStartE2EDuration="5.240714766s" podCreationTimestamp="2025-11-26 13:30:07 +0000 UTC" firstStartedPulling="2025-11-26 13:30:09.187200644 +0000 UTC m=+896.173511699" lastFinishedPulling="2025-11-26 13:30:11.933404225 +0000 UTC m=+898.919715260" observedRunningTime="2025-11-26 13:30:12.235945257 +0000 UTC m=+899.222256272" watchObservedRunningTime="2025-11-26 13:30:12.240714766 +0000 UTC m=+899.227025801"
Nov 26 13:30:14 crc kubenswrapper[4747]: I1126 13:30:14.189764 4747 scope.go:117] "RemoveContainer" containerID="a6f133a97a4b5744b5a60a4e63dbfff27e6ac646e388cf86622b6480a09bca1e"
Nov 26 13:30:15 crc kubenswrapper[4747]: I1126 13:30:15.233964 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lb7jc_aa6ddbde-eada-4a3c-bbf8-eae2cd30ccc1/kube-multus/2.log"
Nov 26 13:30:18 crc kubenswrapper[4747]: I1126 13:30:18.115403 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:18 crc kubenswrapper[4747]: I1126 13:30:18.115779 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:18 crc kubenswrapper[4747]: I1126 13:30:18.170973 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:18 crc kubenswrapper[4747]: I1126 13:30:18.287089 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qdwsx"
Nov 26 13:30:19 crc kubenswrapper[4747]: I1126 13:30:19.970043 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"]
pod="openshift-marketplace/redhat-operators-qdwsx" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="registry-server" containerID="cri-o://a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66" gracePeriod=2 Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.652251 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwsx" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.793983 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content\") pod \"92eae12b-5e32-4df6-a494-d7ad3a83779c\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.794156 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities\") pod \"92eae12b-5e32-4df6-a494-d7ad3a83779c\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.794186 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df54m\" (UniqueName: \"kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m\") pod \"92eae12b-5e32-4df6-a494-d7ad3a83779c\" (UID: \"92eae12b-5e32-4df6-a494-d7ad3a83779c\") " Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.795650 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities" (OuterVolumeSpecName: "utilities") pod "92eae12b-5e32-4df6-a494-d7ad3a83779c" (UID: "92eae12b-5e32-4df6-a494-d7ad3a83779c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.802713 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m" (OuterVolumeSpecName: "kube-api-access-df54m") pod "92eae12b-5e32-4df6-a494-d7ad3a83779c" (UID: "92eae12b-5e32-4df6-a494-d7ad3a83779c"). InnerVolumeSpecName "kube-api-access-df54m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.895718 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.895760 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df54m\" (UniqueName: \"kubernetes.io/projected/92eae12b-5e32-4df6-a494-d7ad3a83779c-kube-api-access-df54m\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.917653 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92eae12b-5e32-4df6-a494-d7ad3a83779c" (UID: "92eae12b-5e32-4df6-a494-d7ad3a83779c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:30:20 crc kubenswrapper[4747]: I1126 13:30:20.996595 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92eae12b-5e32-4df6-a494-d7ad3a83779c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.265711 4747 generic.go:334] "Generic (PLEG): container finished" podID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerID="a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66" exitCode=0 Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.265758 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerDied","Data":"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66"} Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.265777 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwsx" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.265795 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwsx" event={"ID":"92eae12b-5e32-4df6-a494-d7ad3a83779c","Type":"ContainerDied","Data":"17ff8bfd26e8f471807f6237b6b6289a448f8742febe76eee1c65036b487de59"} Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.265836 4747 scope.go:117] "RemoveContainer" containerID="a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.291210 4747 scope.go:117] "RemoveContainer" containerID="b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.308478 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"] Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.312970 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qdwsx"] Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.332679 4747 scope.go:117] "RemoveContainer" containerID="64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.346040 4747 scope.go:117] "RemoveContainer" containerID="a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66" Nov 26 13:30:21 crc kubenswrapper[4747]: E1126 13:30:21.346324 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66\": container with ID starting with a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66 not found: ID does not exist" containerID="a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.346363 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66"} err="failed to get container status \"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66\": rpc error: code = NotFound desc = could not find container \"a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66\": container with ID starting with a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66 not found: ID does not exist" Nov 26 13:30:21 crc 
kubenswrapper[4747]: I1126 13:30:21.346391 4747 scope.go:117] "RemoveContainer" containerID="b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651" Nov 26 13:30:21 crc kubenswrapper[4747]: E1126 13:30:21.346715 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651\": container with ID starting with b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651 not found: ID does not exist" containerID="b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.346735 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651"} err="failed to get container status \"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651\": rpc error: code = NotFound desc = could not find container \"b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651\": container with ID starting with b7e466d3f58d363b971d42be20691d251557b5634decd3989c415df39a880651 not found: ID does not exist" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.346750 4747 scope.go:117] "RemoveContainer" containerID="64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a" Nov 26 13:30:21 crc kubenswrapper[4747]: E1126 13:30:21.346951 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a\": container with ID starting with 64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a not found: ID does not exist" containerID="64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.346980 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a"} err="failed to get container status \"64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a\": rpc error: code = NotFound desc = could not find container \"64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a\": container with ID starting with 64e5dad2ee7e082bb7d5a9792e8425af09ae5792c8704516506da622321f381a not found: ID does not exist" Nov 26 13:30:21 crc kubenswrapper[4747]: I1126 13:30:21.807424 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" path="/var/lib/kubelet/pods/92eae12b-5e32-4df6-a494-d7ad3a83779c/volumes" Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.874688 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"] Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875134 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="util" Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875145 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="util" Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875157 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="extract-content" Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875164 4747 
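The `ContainerStatus from runtime service failed ... code = NotFound` errors above come back over the CRI gRPC API once CRI-O has already removed the container; during cleanup the kubelet treats NotFound as benign. A sketch of the status-code check, assuming the standard grpc-go status/codes packages:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isCRINotFound reports whether an error from the CRI runtime carries
// gRPC code NotFound, which is how CRI-O answers ContainerStatus for a
// container that no longer exists.
func isCRINotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Shaped like the error logged at 13:30:21.346324.
	err := status.Error(codes.NotFound,
		`could not find container "a702fb10008c0777e0b5c11c6294f01d22ac9e069688dc9124b9900cad452c66"`)
	fmt.Println(isCRINotFound(err))            // true
	fmt.Println(isCRINotFound(errors.New("x"))) // false
}
```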
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875164 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="extract-content"
Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875172 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="pull"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875177 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="pull"
Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875186 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="registry-server"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875192 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="registry-server"
Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875202 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="extract-utilities"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875209 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="extract-utilities"
Nov 26 13:30:22 crc kubenswrapper[4747]: E1126 13:30:22.875220 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="extract"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875225 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="extract"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875327 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="92eae12b-5e32-4df6-a494-d7ad3a83779c" containerName="registry-server"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875342 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e988ef3-c6be-4380-853f-95039903e425" containerName="extract"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.875738 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.877892 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.877908 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.878390 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.878431 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-54ghl"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.879963 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 26 13:30:22 crc kubenswrapper[4747]: I1126 13:30:22.892446 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"]
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.023105 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfphz\" (UniqueName: \"kubernetes.io/projected/78093636-f156-4295-918f-8aa7278c3f69-kube-api-access-jfphz\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.023270 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78093636-f156-4295-918f-8aa7278c3f69-webhook-cert\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.023374 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78093636-f156-4295-918f-8aa7278c3f69-apiservice-cert\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.108825 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc"]
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.109426 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.111817 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.111842 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 26 13:30:23 crc kubenswrapper[4747]: W1126 13:30:23.111843 4747 reflector.go:561] object-"metallb-system"/"controller-dockercfg-t5qg9": failed to list *v1.Secret: secrets "controller-dockercfg-t5qg9" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Nov 26 13:30:23 crc kubenswrapper[4747]: E1126 13:30:23.111906 4747 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-t5qg9\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-dockercfg-t5qg9\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.121821 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc"]
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.124002 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfphz\" (UniqueName: \"kubernetes.io/projected/78093636-f156-4295-918f-8aa7278c3f69-kube-api-access-jfphz\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.124091 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78093636-f156-4295-918f-8aa7278c3f69-webhook-cert\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.124129 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78093636-f156-4295-918f-8aa7278c3f69-apiservice-cert\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.128986 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78093636-f156-4295-918f-8aa7278c3f69-apiservice-cert\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"
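The forbidden Secret watch above is the node authorizer at work: `system:node:crc` may read a secret only once a pod scheduled to that node references it, which is why the same secret's cache populates about a second later, after the webhook-server pod is bound. A hypothetical client-go reproduction of the denied read (the kubeconfig path is a placeholder, not taken from the log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; with node credentials this Get is forbidden
	// until a pod on the node references the secret.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/node-kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Secrets("metallb-system").Get(context.TODO(),
		"controller-dockercfg-t5qg9", metav1.GetOptions{})
	fmt.Println(err) // expected: "no relationship found between node 'crc' and this object"
}
```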
\"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.151584 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfphz\" (UniqueName: \"kubernetes.io/projected/78093636-f156-4295-918f-8aa7278c3f69-kube-api-access-jfphz\") pod \"metallb-operator-controller-manager-6db6754d4-6csp2\" (UID: \"78093636-f156-4295-918f-8aa7278c3f69\") " pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.189011 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.225557 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-webhook-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.225631 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-apiservice-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.225674 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sz86\" (UniqueName: \"kubernetes.io/projected/1a251542-86df-4644-acf8-6dd3d58697ad-kube-api-access-7sz86\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.326647 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-webhook-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.327070 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-apiservice-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.327141 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sz86\" (UniqueName: \"kubernetes.io/projected/1a251542-86df-4644-acf8-6dd3d58697ad-kube-api-access-7sz86\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: 
I1126 13:30:23.333987 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-apiservice-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.333990 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a251542-86df-4644-acf8-6dd3d58697ad-webhook-cert\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.348079 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sz86\" (UniqueName: \"kubernetes.io/projected/1a251542-86df-4644-acf8-6dd3d58697ad-kube-api-access-7sz86\") pod \"metallb-operator-webhook-server-768865bcf6-7wrrc\" (UID: \"1a251542-86df-4644-acf8-6dd3d58697ad\") " pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:23 crc kubenswrapper[4747]: I1126 13:30:23.409732 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2"] Nov 26 13:30:23 crc kubenswrapper[4747]: W1126 13:30:23.416290 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78093636_f156_4295_918f_8aa7278c3f69.slice/crio-77c18ba83b788710d5b4555a7a2c342751b2057e44f3d60c6814af8dd82b94ca WatchSource:0}: Error finding container 77c18ba83b788710d5b4555a7a2c342751b2057e44f3d60c6814af8dd82b94ca: Status 404 returned error can't find the container with id 77c18ba83b788710d5b4555a7a2c342751b2057e44f3d60c6814af8dd82b94ca Nov 26 13:30:24 crc kubenswrapper[4747]: I1126 13:30:24.294505 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" event={"ID":"78093636-f156-4295-918f-8aa7278c3f69","Type":"ContainerStarted","Data":"77c18ba83b788710d5b4555a7a2c342751b2057e44f3d60c6814af8dd82b94ca"} Nov 26 13:30:24 crc kubenswrapper[4747]: I1126 13:30:24.487320 4747 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" secret="" err="failed to sync secret cache: timed out waiting for the condition" Nov 26 13:30:24 crc kubenswrapper[4747]: I1126 13:30:24.487683 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:24 crc kubenswrapper[4747]: I1126 13:30:24.697191 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-t5qg9" Nov 26 13:30:24 crc kubenswrapper[4747]: I1126 13:30:24.707704 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc"] Nov 26 13:30:25 crc kubenswrapper[4747]: I1126 13:30:25.300596 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" event={"ID":"1a251542-86df-4644-acf8-6dd3d58697ad","Type":"ContainerStarted","Data":"66bb49d6e1052e6b234ad15680f6d9476c46389eb21226a4f6d56fa45a6ed950"} Nov 26 13:30:27 crc kubenswrapper[4747]: I1126 13:30:27.321095 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" event={"ID":"78093636-f156-4295-918f-8aa7278c3f69","Type":"ContainerStarted","Data":"ce9c811e52a7749889c6f71fdfc3a9cf8a940408894f4c3629a06fcec232afb1"} Nov 26 13:30:27 crc kubenswrapper[4747]: I1126 13:30:27.321465 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" Nov 26 13:30:27 crc kubenswrapper[4747]: I1126 13:30:27.350083 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" podStartSLOduration=2.270719438 podStartE2EDuration="5.350025506s" podCreationTimestamp="2025-11-26 13:30:22 +0000 UTC" firstStartedPulling="2025-11-26 13:30:23.420174371 +0000 UTC m=+910.406485396" lastFinishedPulling="2025-11-26 13:30:26.499480449 +0000 UTC m=+913.485791464" observedRunningTime="2025-11-26 13:30:27.349348359 +0000 UTC m=+914.335659384" watchObservedRunningTime="2025-11-26 13:30:27.350025506 +0000 UTC m=+914.336336571" Nov 26 13:30:30 crc kubenswrapper[4747]: I1126 13:30:30.338598 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" event={"ID":"1a251542-86df-4644-acf8-6dd3d58697ad","Type":"ContainerStarted","Data":"26ea945d79b7d38d4498f1e37b93c5fd769bbdbede88206434909a7b4e94f57e"} Nov 26 13:30:30 crc kubenswrapper[4747]: I1126 13:30:30.338894 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:30:30 crc kubenswrapper[4747]: I1126 13:30:30.351760 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" podStartSLOduration=2.61489629 podStartE2EDuration="7.351723361s" podCreationTimestamp="2025-11-26 13:30:23 +0000 UTC" firstStartedPulling="2025-11-26 13:30:24.737935302 +0000 UTC m=+911.724246317" lastFinishedPulling="2025-11-26 13:30:29.474762373 +0000 UTC m=+916.461073388" observedRunningTime="2025-11-26 13:30:30.351635219 +0000 UTC m=+917.337946254" watchObservedRunningTime="2025-11-26 13:30:30.351723361 +0000 UTC m=+917.338034376" Nov 26 13:30:33 crc kubenswrapper[4747]: I1126 13:30:33.417846 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 
13:30:33 crc kubenswrapper[4747]: I1126 13:30:33.418155 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:30:44 crc kubenswrapper[4747]: I1126 13:30:44.494429 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-768865bcf6-7wrrc" Nov 26 13:31:03 crc kubenswrapper[4747]: I1126 13:31:03.193631 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6db6754d4-6csp2" Nov 26 13:31:03 crc kubenswrapper[4747]: I1126 13:31:03.417294 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:31:03 crc kubenswrapper[4747]: I1126 13:31:03.417354 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.068337 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"] Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.070548 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.076919 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c67pb" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.086215 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.089444 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-87pwf"] Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.092709 4747 util.go:30] "No sandbox for pod can be found. 
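The repeated liveness failures above are a plain HTTP GET against the machine-config-daemon's health endpoint, and `connection refused` means nothing is listening on 127.0.0.1:8798 at probe time. An equivalent check in Go, matching the URL the prober logs:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
		// the same output the prober recorded.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```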
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.092709 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.095687 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.095957 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.100292 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"]
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.158466 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw8zl\" (UniqueName: \"kubernetes.io/projected/45f60cce-70b3-4e45-98a4-c66edcca9e65-kube-api-access-lw8zl\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.158533 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.214024 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pds5h"]
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.215044 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.216780 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.217031 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.217258 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mnbmm"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.217408 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.259509 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-g98d5"]
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260226 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw8zl\" (UniqueName: \"kubernetes.io/projected/45f60cce-70b3-4e45-98a4-c66edcca9e65-kube-api-access-lw8zl\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260290 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260335 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-sockets\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260368 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260412 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-conf\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260452 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-reloader\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260481 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-startup\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260534 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgb7k\" (UniqueName: \"kubernetes.io/projected/a376e01b-77a9-4bcc-af7c-34a3994a5b20-kube-api-access-mgb7k\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.260580 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.260726 4747 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.260778 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert podName:45f60cce-70b3-4e45-98a4-c66edcca9e65 nodeName:}" failed. No retries permitted until 2025-11-26 13:31:04.760763012 +0000 UTC m=+951.747074027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert") pod "frr-k8s-webhook-server-6998585d5-k77x5" (UID: "45f60cce-70b3-4e45-98a4-c66edcca9e65") : secret "frr-k8s-webhook-server-cert" not found
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.261330 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.266733 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.281977 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-g98d5"]
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.289680 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw8zl\" (UniqueName: \"kubernetes.io/projected/45f60cce-70b3-4e45-98a4-c66edcca9e65-kube-api-access-lw8zl\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.361935 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.361977 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metallb-excludel2\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362008 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-sockets\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362041 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-metrics-certs\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362073 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-conf\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362088 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-cert\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362231 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrqc\" (UniqueName: \"kubernetes.io/projected/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-kube-api-access-twrqc\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362300 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-reloader\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362334 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-startup\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362396 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6ll\" (UniqueName: \"kubernetes.io/projected/84ac964b-de2c-43b9-ae3c-c7ea157287dd-kube-api-access-gx6ll\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362468 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgb7k\" (UniqueName: \"kubernetes.io/projected/a376e01b-77a9-4bcc-af7c-34a3994a5b20-kube-api-access-mgb7k\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362475 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-conf\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362496 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metrics-certs\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362556 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362600 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362621 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-reloader\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-sockets\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.362920 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.363449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a376e01b-77a9-4bcc-af7c-34a3994a5b20-frr-startup\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.364557 4747 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.364696 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs podName:a376e01b-77a9-4bcc-af7c-34a3994a5b20 nodeName:}" failed. No retries permitted until 2025-11-26 13:31:04.864668633 +0000 UTC m=+951.850979648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs") pod "frr-k8s-87pwf" (UID: "a376e01b-77a9-4bcc-af7c-34a3994a5b20") : secret "frr-k8s-certs-secret" not found
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.384810 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgb7k\" (UniqueName: \"kubernetes.io/projected/a376e01b-77a9-4bcc-af7c-34a3994a5b20-kube-api-access-mgb7k\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463349 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463400 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metallb-excludel2\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463432 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-metrics-certs\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463450 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-cert\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5"
Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463471 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-twrqc\" (UniqueName: \"kubernetes.io/projected/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-kube-api-access-twrqc\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463499 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6ll\" (UniqueName: \"kubernetes.io/projected/84ac964b-de2c-43b9-ae3c-c7ea157287dd-kube-api-access-gx6ll\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.463510 4747 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.463525 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metrics-certs\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.463572 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist podName:84ac964b-de2c-43b9-ae3c-c7ea157287dd nodeName:}" failed. No retries permitted until 2025-11-26 13:31:04.963557488 +0000 UTC m=+951.949868503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist") pod "speaker-pds5h" (UID: "84ac964b-de2c-43b9-ae3c-c7ea157287dd") : secret "metallb-memberlist" not found Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.464224 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metallb-excludel2\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.466590 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-metrics-certs\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.466610 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-cert\") pod \"controller-6c7b4b5f48-g98d5\" (UID: \"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.466639 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-metrics-certs\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.478958 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrqc\" (UniqueName: \"kubernetes.io/projected/e0573c54-5abb-4857-8b6d-dcdcc168f1a0-kube-api-access-twrqc\") pod \"controller-6c7b4b5f48-g98d5\" (UID: 
\"e0573c54-5abb-4857-8b6d-dcdcc168f1a0\") " pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.490411 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6ll\" (UniqueName: \"kubernetes.io/projected/84ac964b-de2c-43b9-ae3c-c7ea157287dd-kube-api-access-gx6ll\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.626908 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.773348 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.777612 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45f60cce-70b3-4e45-98a4-c66edcca9e65-cert\") pod \"frr-k8s-webhook-server-6998585d5-k77x5\" (UID: \"45f60cce-70b3-4e45-98a4-c66edcca9e65\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.875680 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.879725 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a376e01b-77a9-4bcc-af7c-34a3994a5b20-metrics-certs\") pod \"frr-k8s-87pwf\" (UID: \"a376e01b-77a9-4bcc-af7c-34a3994a5b20\") " pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.976753 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.976936 4747 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 13:31:04 crc kubenswrapper[4747]: E1126 13:31:04.977010 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist podName:84ac964b-de2c-43b9-ae3c-c7ea157287dd nodeName:}" failed. No retries permitted until 2025-11-26 13:31:05.976992567 +0000 UTC m=+952.963303592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist") pod "speaker-pds5h" (UID: "84ac964b-de2c-43b9-ae3c-c7ea157287dd") : secret "metallb-memberlist" not found Nov 26 13:31:04 crc kubenswrapper[4747]: I1126 13:31:04.988231 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.007788 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.065672 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-g98d5"] Nov 26 13:31:05 crc kubenswrapper[4747]: W1126 13:31:05.076554 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0573c54_5abb_4857_8b6d_dcdcc168f1a0.slice/crio-b8c73a15a05362bc2a3c4e0714851993569bd5744e797457d99f7838377db7d5 WatchSource:0}: Error finding container b8c73a15a05362bc2a3c4e0714851993569bd5744e797457d99f7838377db7d5: Status 404 returned error can't find the container with id b8c73a15a05362bc2a3c4e0714851993569bd5744e797457d99f7838377db7d5 Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.095948 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.148733 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.148846 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.280818 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.281127 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.281225 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgb6\" (UniqueName: \"kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.329572 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-k77x5"] Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.382632 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgb6\" (UniqueName: \"kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.383034 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.383079 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.383634 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.383677 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.398986 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgb6\" (UniqueName: \"kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6\") pod \"redhat-marketplace-rkv2k\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.471727 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.552826 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"685e00ecc7a75b5343001adff24b3b75758363ec743306584ba2bc3b590c8d30"} Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.554866 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" event={"ID":"45f60cce-70b3-4e45-98a4-c66edcca9e65","Type":"ContainerStarted","Data":"1871113e7d831e30804fef4a5f27225747d97ed7c9a74df954e2a82c4c324da5"} Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.555824 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-g98d5" event={"ID":"e0573c54-5abb-4857-8b6d-dcdcc168f1a0","Type":"ContainerStarted","Data":"49f6efc367f116be1863e19c2624020844bc9715076583a80bf29adf8202463b"} Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.555843 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-g98d5" event={"ID":"e0573c54-5abb-4857-8b6d-dcdcc168f1a0","Type":"ContainerStarted","Data":"b8c73a15a05362bc2a3c4e0714851993569bd5744e797457d99f7838377db7d5"} Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.669298 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:05 crc kubenswrapper[4747]: W1126 13:31:05.678146 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914e066f_f867_41d3_972e_ddb30caccc32.slice/crio-c606aaacbe78661cba6726fa224c328b2edf6394491484af06d944427d02d1eb WatchSource:0}: Error finding container c606aaacbe78661cba6726fa224c328b2edf6394491484af06d944427d02d1eb: Status 404 returned error can't find the container with id c606aaacbe78661cba6726fa224c328b2edf6394491484af06d944427d02d1eb Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.991592 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:05 crc kubenswrapper[4747]: I1126 13:31:05.997682 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84ac964b-de2c-43b9-ae3c-c7ea157287dd-memberlist\") pod \"speaker-pds5h\" (UID: \"84ac964b-de2c-43b9-ae3c-c7ea157287dd\") " pod="metallb-system/speaker-pds5h" Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.035229 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pds5h" Nov 26 13:31:06 crc kubenswrapper[4747]: W1126 13:31:06.062174 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84ac964b_de2c_43b9_ae3c_c7ea157287dd.slice/crio-8ba59cba5a8db5a3bd5b10843ad888adcebae36967ee8f096522162f2550c5bf WatchSource:0}: Error finding container 8ba59cba5a8db5a3bd5b10843ad888adcebae36967ee8f096522162f2550c5bf: Status 404 returned error can't find the container with id 8ba59cba5a8db5a3bd5b10843ad888adcebae36967ee8f096522162f2550c5bf Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.568294 4747 generic.go:334] "Generic (PLEG): container finished" podID="914e066f-f867-41d3-972e-ddb30caccc32" containerID="43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b" exitCode=0 Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.568694 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerDied","Data":"43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b"} Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.568725 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerStarted","Data":"c606aaacbe78661cba6726fa224c328b2edf6394491484af06d944427d02d1eb"} Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.574675 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pds5h" event={"ID":"84ac964b-de2c-43b9-ae3c-c7ea157287dd","Type":"ContainerStarted","Data":"0920288bd7b16f68035102382a1ccf194ddccfcd3c6d8068219d182bf24fadda"} Nov 26 13:31:06 crc kubenswrapper[4747]: I1126 13:31:06.574719 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pds5h" event={"ID":"84ac964b-de2c-43b9-ae3c-c7ea157287dd","Type":"ContainerStarted","Data":"8ba59cba5a8db5a3bd5b10843ad888adcebae36967ee8f096522162f2550c5bf"} Nov 26 13:31:07 crc kubenswrapper[4747]: I1126 13:31:07.584413 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerStarted","Data":"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1"} Nov 26 13:31:08 crc kubenswrapper[4747]: I1126 13:31:08.594470 4747 generic.go:334] "Generic (PLEG): container finished" podID="914e066f-f867-41d3-972e-ddb30caccc32" containerID="5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1" exitCode=0 Nov 26 13:31:08 crc kubenswrapper[4747]: I1126 13:31:08.594547 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerDied","Data":"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1"} Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.602118 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pds5h" event={"ID":"84ac964b-de2c-43b9-ae3c-c7ea157287dd","Type":"ContainerStarted","Data":"3d26f6802b28303bcb1899ea5780cedd41ddafdea3388cfdf0bb5856f9d23301"} Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.602510 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pds5h" Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.605720 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerStarted","Data":"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5"} Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.611875 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-g98d5" event={"ID":"e0573c54-5abb-4857-8b6d-dcdcc168f1a0","Type":"ContainerStarted","Data":"a0474d016fe020f11df784f950c93fd457f7cbaf7a7cd7280ba0d7bd86e4b085"} Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.612066 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.620089 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pds5h" podStartSLOduration=3.401466126 podStartE2EDuration="5.620069802s" podCreationTimestamp="2025-11-26 13:31:04 +0000 UTC" firstStartedPulling="2025-11-26 13:31:06.350882092 +0000 UTC m=+953.337193107" lastFinishedPulling="2025-11-26 13:31:08.569485778 +0000 UTC m=+955.555796783" observedRunningTime="2025-11-26 13:31:09.618259517 +0000 UTC m=+956.604570532" watchObservedRunningTime="2025-11-26 13:31:09.620069802 +0000 UTC m=+956.606380837" Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.639722 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-g98d5" podStartSLOduration=2.385922969 podStartE2EDuration="5.639701764s" podCreationTimestamp="2025-11-26 13:31:04 +0000 UTC" firstStartedPulling="2025-11-26 13:31:05.315359725 +0000 UTC m=+952.301670750" lastFinishedPulling="2025-11-26 13:31:08.56913853 +0000 UTC m=+955.555449545" observedRunningTime="2025-11-26 13:31:09.635105489 +0000 UTC m=+956.621416514" watchObservedRunningTime="2025-11-26 13:31:09.639701764 +0000 UTC m=+956.626012789" Nov 26 13:31:09 crc kubenswrapper[4747]: I1126 13:31:09.654992 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rkv2k" podStartSLOduration=2.234305773 podStartE2EDuration="4.654971736s" podCreationTimestamp="2025-11-26 13:31:05 +0000 UTC" firstStartedPulling="2025-11-26 13:31:06.5713202 +0000 UTC m=+953.557631215" lastFinishedPulling="2025-11-26 13:31:08.991986163 +0000 UTC m=+955.978297178" observedRunningTime="2025-11-26 13:31:09.651895379 +0000 UTC m=+956.638206414" watchObservedRunningTime="2025-11-26 13:31:09.654971736 +0000 UTC m=+956.641282751" Nov 26 13:31:12 crc kubenswrapper[4747]: I1126 13:31:12.629160 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" event={"ID":"45f60cce-70b3-4e45-98a4-c66edcca9e65","Type":"ContainerStarted","Data":"2a8a6274def7024d89c00788fcf6fc89f4d5d8e01b722ad20c34e036b72a88b1"} Nov 26 13:31:12 crc kubenswrapper[4747]: I1126 13:31:12.629741 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:12 crc kubenswrapper[4747]: I1126 13:31:12.631291 4747 generic.go:334] "Generic (PLEG): container finished" podID="a376e01b-77a9-4bcc-af7c-34a3994a5b20" containerID="e86d70753aa58f21701b1b50f6f252825b5ebc0ab77ef3d03cfd0118491abba4" exitCode=0 Nov 26 13:31:12 crc kubenswrapper[4747]: I1126 13:31:12.631327 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" 
event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerDied","Data":"e86d70753aa58f21701b1b50f6f252825b5ebc0ab77ef3d03cfd0118491abba4"} Nov 26 13:31:12 crc kubenswrapper[4747]: I1126 13:31:12.649285 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" podStartSLOduration=1.807041111 podStartE2EDuration="8.649248095s" podCreationTimestamp="2025-11-26 13:31:04 +0000 UTC" firstStartedPulling="2025-11-26 13:31:05.335311405 +0000 UTC m=+952.321622420" lastFinishedPulling="2025-11-26 13:31:12.177518389 +0000 UTC m=+959.163829404" observedRunningTime="2025-11-26 13:31:12.648161618 +0000 UTC m=+959.634472643" watchObservedRunningTime="2025-11-26 13:31:12.649248095 +0000 UTC m=+959.635559140" Nov 26 13:31:13 crc kubenswrapper[4747]: I1126 13:31:13.642431 4747 generic.go:334] "Generic (PLEG): container finished" podID="a376e01b-77a9-4bcc-af7c-34a3994a5b20" containerID="b760ce21c7347ef15fc64a54e799300122c86515e664b5aca47051704031fce1" exitCode=0 Nov 26 13:31:13 crc kubenswrapper[4747]: I1126 13:31:13.642536 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerDied","Data":"b760ce21c7347ef15fc64a54e799300122c86515e664b5aca47051704031fce1"} Nov 26 13:31:14 crc kubenswrapper[4747]: I1126 13:31:14.653533 4747 generic.go:334] "Generic (PLEG): container finished" podID="a376e01b-77a9-4bcc-af7c-34a3994a5b20" containerID="4ce93b0a01f5b2baa6de1413d76bcdab7f34b960d60ef300655b6f55a274d1de" exitCode=0 Nov 26 13:31:14 crc kubenswrapper[4747]: I1126 13:31:14.653622 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerDied","Data":"4ce93b0a01f5b2baa6de1413d76bcdab7f34b960d60ef300655b6f55a274d1de"} Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.475681 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.476315 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.541241 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.661963 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"e551cddf7e09e7d05b2f90efa2b451ecdf7e5cf23e7780cde9607bd8bc0fce34"} Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.662077 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"8848d5bb8c8c14caee9aa869bdb706f0b9406b74b73d82b01c20e9e120de350e"} Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.730648 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:15 crc kubenswrapper[4747]: I1126 13:31:15.785627 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.042498 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/speaker-pds5h" Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.672525 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"ccd693e67cbe4dda1ba84441802e0bab4d725e43a48ed987142dfc50528346c7"} Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.672771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"a2cd4d532d97c4e43ffff1953e2d8ef703a2a59bfc42792f3dfad8db4004d109"} Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.672785 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"1f03ec4ab4dc88423c989104bfd455a070adf233fdd03c0df95b0e5c2ceff561"} Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.672797 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87pwf" event={"ID":"a376e01b-77a9-4bcc-af7c-34a3994a5b20","Type":"ContainerStarted","Data":"a0b0058a27dae7c3a54666e7668a1852d001165bc5374dec64cd8dfa3d5d65a4"} Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.672817 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:16 crc kubenswrapper[4747]: I1126 13:31:16.702116 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-87pwf" podStartSLOduration=5.790849226 podStartE2EDuration="12.702099028s" podCreationTimestamp="2025-11-26 13:31:04 +0000 UTC" firstStartedPulling="2025-11-26 13:31:05.239271701 +0000 UTC m=+952.225582716" lastFinishedPulling="2025-11-26 13:31:12.150521503 +0000 UTC m=+959.136832518" observedRunningTime="2025-11-26 13:31:16.699955325 +0000 UTC m=+963.686266340" watchObservedRunningTime="2025-11-26 13:31:16.702099028 +0000 UTC m=+963.688410063" Nov 26 13:31:17 crc kubenswrapper[4747]: I1126 13:31:17.686709 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rkv2k" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="registry-server" containerID="cri-o://c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5" gracePeriod=2 Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.100906 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.171508 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content\") pod \"914e066f-f867-41d3-972e-ddb30caccc32\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.171567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities\") pod \"914e066f-f867-41d3-972e-ddb30caccc32\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.171622 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgb6\" (UniqueName: \"kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6\") pod \"914e066f-f867-41d3-972e-ddb30caccc32\" (UID: \"914e066f-f867-41d3-972e-ddb30caccc32\") " Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.173314 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities" (OuterVolumeSpecName: "utilities") pod "914e066f-f867-41d3-972e-ddb30caccc32" (UID: "914e066f-f867-41d3-972e-ddb30caccc32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.182675 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6" (OuterVolumeSpecName: "kube-api-access-gzgb6") pod "914e066f-f867-41d3-972e-ddb30caccc32" (UID: "914e066f-f867-41d3-972e-ddb30caccc32"). InnerVolumeSpecName "kube-api-access-gzgb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.194199 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "914e066f-f867-41d3-972e-ddb30caccc32" (UID: "914e066f-f867-41d3-972e-ddb30caccc32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.274381 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.274430 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914e066f-f867-41d3-972e-ddb30caccc32-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.274448 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzgb6\" (UniqueName: \"kubernetes.io/projected/914e066f-f867-41d3-972e-ddb30caccc32-kube-api-access-gzgb6\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.693487 4747 generic.go:334] "Generic (PLEG): container finished" podID="914e066f-f867-41d3-972e-ddb30caccc32" containerID="c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5" exitCode=0 Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.693530 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerDied","Data":"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5"} Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.693557 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkv2k" event={"ID":"914e066f-f867-41d3-972e-ddb30caccc32","Type":"ContainerDied","Data":"c606aaacbe78661cba6726fa224c328b2edf6394491484af06d944427d02d1eb"} Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.693578 4747 scope.go:117] "RemoveContainer" containerID="c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.693683 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkv2k" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.722945 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.727158 4747 scope.go:117] "RemoveContainer" containerID="5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.727547 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkv2k"] Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.752885 4747 scope.go:117] "RemoveContainer" containerID="43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.776987 4747 scope.go:117] "RemoveContainer" containerID="c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5" Nov 26 13:31:18 crc kubenswrapper[4747]: E1126 13:31:18.777589 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5\": container with ID starting with c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5 not found: ID does not exist" containerID="c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.777617 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5"} err="failed to get container status \"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5\": rpc error: code = NotFound desc = could not find container \"c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5\": container with ID starting with c5862d3ec895b99d5655d286925c6f5621823e648162c302b8febf842b75b7d5 not found: ID does not exist" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.777637 4747 scope.go:117] "RemoveContainer" containerID="5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1" Nov 26 13:31:18 crc kubenswrapper[4747]: E1126 13:31:18.778177 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1\": container with ID starting with 5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1 not found: ID does not exist" containerID="5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.778305 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1"} err="failed to get container status \"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1\": rpc error: code = NotFound desc = could not find container \"5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1\": container with ID starting with 5e2282cb6dc599337ba8fae49812621b8a3fba1cc3cf03c123e4db38fe5b2ae1 not found: ID does not exist" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.778419 4747 scope.go:117] "RemoveContainer" containerID="43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b" Nov 26 13:31:18 crc kubenswrapper[4747]: E1126 13:31:18.778946 4747 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b\": container with ID starting with 43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b not found: ID does not exist" containerID="43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b" Nov 26 13:31:18 crc kubenswrapper[4747]: I1126 13:31:18.778996 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b"} err="failed to get container status \"43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b\": rpc error: code = NotFound desc = could not find container \"43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b\": container with ID starting with 43cbacdbdde650360fbc5a1cffdee95ee6489f9cec301e0b66d94f800a68430b not found: ID does not exist" Nov 26 13:31:19 crc kubenswrapper[4747]: I1126 13:31:19.806449 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914e066f-f867-41d3-972e-ddb30caccc32" path="/var/lib/kubelet/pods/914e066f-f867-41d3-972e-ddb30caccc32/volumes" Nov 26 13:31:20 crc kubenswrapper[4747]: I1126 13:31:20.008832 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:20 crc kubenswrapper[4747]: I1126 13:31:20.056865 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:21 crc kubenswrapper[4747]: I1126 13:31:21.999490 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:22 crc kubenswrapper[4747]: E1126 13:31:21.999840 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="extract-utilities" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:21.999862 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="extract-utilities" Nov 26 13:31:22 crc kubenswrapper[4747]: E1126 13:31:21.999881 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="extract-content" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:21.999894 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="extract-content" Nov 26 13:31:22 crc kubenswrapper[4747]: E1126 13:31:21.999921 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="registry-server" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:21.999932 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="registry-server" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.000171 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="914e066f-f867-41d3-972e-ddb30caccc32" containerName="registry-server" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.002527 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.028762 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.124122 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.124199 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brjp\" (UniqueName: \"kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.124264 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.225727 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.225800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brjp\" (UniqueName: \"kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.225820 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.226263 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.226324 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.250363 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4brjp\" (UniqueName: \"kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp\") pod \"certified-operators-k82mg\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.327302 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.559929 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:22 crc kubenswrapper[4747]: W1126 13:31:22.565589 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd222068b_8060_4d7f_955e_09f0adc01039.slice/crio-451ae18c8e10acd3583c5711034c6b21fe4658559baf8f55228ffc812d92fe94 WatchSource:0}: Error finding container 451ae18c8e10acd3583c5711034c6b21fe4658559baf8f55228ffc812d92fe94: Status 404 returned error can't find the container with id 451ae18c8e10acd3583c5711034c6b21fe4658559baf8f55228ffc812d92fe94 Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.590899 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.593084 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.603563 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.720236 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerStarted","Data":"451ae18c8e10acd3583c5711034c6b21fe4658559baf8f55228ffc812d92fe94"} Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.734001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.734080 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmsn7\" (UniqueName: \"kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.734298 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.835960 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.837460 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.837497 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmsn7\" (UniqueName: \"kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.837932 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.838143 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.858255 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmsn7\" (UniqueName: \"kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7\") pod \"community-operators-dfzww\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:22 crc kubenswrapper[4747]: I1126 13:31:22.950211 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.416879 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.733681 4747 generic.go:334] "Generic (PLEG): container finished" podID="d222068b-8060-4d7f-955e-09f0adc01039" containerID="18beff32a0404aaad584e9baac678953056b632d2aeb99ebb80fefb05cd45cb2" exitCode=0 Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.733766 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerDied","Data":"18beff32a0404aaad584e9baac678953056b632d2aeb99ebb80fefb05cd45cb2"} Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.737457 4747 generic.go:334] "Generic (PLEG): container finished" podID="8142f539-463b-4093-b71a-590349b1ddba" containerID="a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6" exitCode=0 Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.737500 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerDied","Data":"a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6"} Nov 26 13:31:23 crc kubenswrapper[4747]: I1126 13:31:23.737527 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerStarted","Data":"923790fdeaf3d86e8414e0542891ea422b2e13af80895d549cea982da3c842b7"} Nov 26 13:31:24 crc kubenswrapper[4747]: I1126 13:31:24.632784 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-g98d5" Nov 26 13:31:24 crc kubenswrapper[4747]: I1126 13:31:24.749183 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerStarted","Data":"979ebb2c8aada5b36191bb5ff35be7e6248973df3b9ab868915f88a5edb0112d"} Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.003268 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-k77x5" Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.012238 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-87pwf" Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.756467 4747 generic.go:334] "Generic (PLEG): container finished" podID="8142f539-463b-4093-b71a-590349b1ddba" containerID="9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c" exitCode=0 Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.756563 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerDied","Data":"9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c"} Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.759005 4747 generic.go:334] "Generic (PLEG): container finished" podID="d222068b-8060-4d7f-955e-09f0adc01039" containerID="979ebb2c8aada5b36191bb5ff35be7e6248973df3b9ab868915f88a5edb0112d" exitCode=0 Nov 26 13:31:25 crc kubenswrapper[4747]: I1126 13:31:25.759044 4747 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerDied","Data":"979ebb2c8aada5b36191bb5ff35be7e6248973df3b9ab868915f88a5edb0112d"} Nov 26 13:31:26 crc kubenswrapper[4747]: I1126 13:31:26.766456 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerStarted","Data":"6b1729ce3c55d8856593202c178989f7f9603bd9b992af5827cd1f5d2ef45b37"} Nov 26 13:31:26 crc kubenswrapper[4747]: I1126 13:31:26.770546 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerStarted","Data":"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752"} Nov 26 13:31:26 crc kubenswrapper[4747]: I1126 13:31:26.791100 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k82mg" podStartSLOduration=3.2853380899999998 podStartE2EDuration="5.791079762s" podCreationTimestamp="2025-11-26 13:31:21 +0000 UTC" firstStartedPulling="2025-11-26 13:31:23.736304088 +0000 UTC m=+970.722615153" lastFinishedPulling="2025-11-26 13:31:26.24204577 +0000 UTC m=+973.228356825" observedRunningTime="2025-11-26 13:31:26.79058959 +0000 UTC m=+973.776900615" watchObservedRunningTime="2025-11-26 13:31:26.791079762 +0000 UTC m=+973.777390787" Nov 26 13:31:26 crc kubenswrapper[4747]: I1126 13:31:26.814087 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dfzww" podStartSLOduration=1.953074695 podStartE2EDuration="4.814070326s" podCreationTimestamp="2025-11-26 13:31:22 +0000 UTC" firstStartedPulling="2025-11-26 13:31:23.7387755 +0000 UTC m=+970.725086525" lastFinishedPulling="2025-11-26 13:31:26.599771131 +0000 UTC m=+973.586082156" observedRunningTime="2025-11-26 13:31:26.810830745 +0000 UTC m=+973.797141770" watchObservedRunningTime="2025-11-26 13:31:26.814070326 +0000 UTC m=+973.800381351" Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.799955 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-gnpfp"] Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.801741 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.805590 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.805731 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-tns6q" Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.806761 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.817551 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-gnpfp"] Nov 26 13:31:28 crc kubenswrapper[4747]: I1126 13:31:28.916896 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7md\" (UniqueName: \"kubernetes.io/projected/ce8a44e5-90cc-4c67-a1f5-fa9b751036de-kube-api-access-8d7md\") pod \"mariadb-operator-index-gnpfp\" (UID: \"ce8a44e5-90cc-4c67-a1f5-fa9b751036de\") " pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:29 crc kubenswrapper[4747]: I1126 13:31:29.019031 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7md\" (UniqueName: \"kubernetes.io/projected/ce8a44e5-90cc-4c67-a1f5-fa9b751036de-kube-api-access-8d7md\") pod \"mariadb-operator-index-gnpfp\" (UID: \"ce8a44e5-90cc-4c67-a1f5-fa9b751036de\") " pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:29 crc kubenswrapper[4747]: I1126 13:31:29.048797 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7md\" (UniqueName: \"kubernetes.io/projected/ce8a44e5-90cc-4c67-a1f5-fa9b751036de-kube-api-access-8d7md\") pod \"mariadb-operator-index-gnpfp\" (UID: \"ce8a44e5-90cc-4c67-a1f5-fa9b751036de\") " pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:29 crc kubenswrapper[4747]: I1126 13:31:29.123147 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:29 crc kubenswrapper[4747]: I1126 13:31:29.306010 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-gnpfp"] Nov 26 13:31:29 crc kubenswrapper[4747]: W1126 13:31:29.309658 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce8a44e5_90cc_4c67_a1f5_fa9b751036de.slice/crio-36fd151e0812eee405bfd1b8477cc2f40adaba044a4a0c41ea33b3c1d3796e38 WatchSource:0}: Error finding container 36fd151e0812eee405bfd1b8477cc2f40adaba044a4a0c41ea33b3c1d3796e38: Status 404 returned error can't find the container with id 36fd151e0812eee405bfd1b8477cc2f40adaba044a4a0c41ea33b3c1d3796e38 Nov 26 13:31:29 crc kubenswrapper[4747]: I1126 13:31:29.789278 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-gnpfp" event={"ID":"ce8a44e5-90cc-4c67-a1f5-fa9b751036de","Type":"ContainerStarted","Data":"36fd151e0812eee405bfd1b8477cc2f40adaba044a4a0c41ea33b3c1d3796e38"} Nov 26 13:31:30 crc kubenswrapper[4747]: I1126 13:31:30.795363 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-gnpfp" event={"ID":"ce8a44e5-90cc-4c67-a1f5-fa9b751036de","Type":"ContainerStarted","Data":"23f6e942cb554bd91dbe656fb3e16548e59f6720e2870ad50a1837ee44633cac"} Nov 26 13:31:30 crc kubenswrapper[4747]: I1126 13:31:30.813212 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-gnpfp" podStartSLOduration=1.461196365 podStartE2EDuration="2.813194152s" podCreationTimestamp="2025-11-26 13:31:28 +0000 UTC" firstStartedPulling="2025-11-26 13:31:29.312560268 +0000 UTC m=+976.298871283" lastFinishedPulling="2025-11-26 13:31:30.664558055 +0000 UTC m=+977.650869070" observedRunningTime="2025-11-26 13:31:30.809228463 +0000 UTC m=+977.795539478" watchObservedRunningTime="2025-11-26 13:31:30.813194152 +0000 UTC m=+977.799505167" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.327589 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.328107 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.374718 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.857965 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.951614 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.951672 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:32 crc kubenswrapper[4747]: I1126 13:31:32.996137 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.417342 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.417406 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.417451 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.418150 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.418224 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de" gracePeriod=600 Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.820756 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de" exitCode=0 Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.820912 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de"} Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.821485 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1"} Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.821524 4747 scope.go:117] "RemoveContainer" containerID="3e39e361e216e7b4e34fbd9b34e29ae07130ba1170214f6c8535aa4f81b6ff77" Nov 26 13:31:33 crc kubenswrapper[4747]: I1126 13:31:33.862737 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:34 crc kubenswrapper[4747]: I1126 13:31:34.989016 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:34 crc kubenswrapper[4747]: I1126 13:31:34.989492 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k82mg" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="registry-server" containerID="cri-o://6b1729ce3c55d8856593202c178989f7f9603bd9b992af5827cd1f5d2ef45b37" gracePeriod=2 Nov 26 13:31:36 crc kubenswrapper[4747]: I1126 13:31:36.847859 
4747 generic.go:334] "Generic (PLEG): container finished" podID="d222068b-8060-4d7f-955e-09f0adc01039" containerID="6b1729ce3c55d8856593202c178989f7f9603bd9b992af5827cd1f5d2ef45b37" exitCode=0 Nov 26 13:31:36 crc kubenswrapper[4747]: I1126 13:31:36.847931 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerDied","Data":"6b1729ce3c55d8856593202c178989f7f9603bd9b992af5827cd1f5d2ef45b37"} Nov 26 13:31:36 crc kubenswrapper[4747]: I1126 13:31:36.924112 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.039269 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content\") pod \"d222068b-8060-4d7f-955e-09f0adc01039\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.039641 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities\") pod \"d222068b-8060-4d7f-955e-09f0adc01039\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.039675 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brjp\" (UniqueName: \"kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp\") pod \"d222068b-8060-4d7f-955e-09f0adc01039\" (UID: \"d222068b-8060-4d7f-955e-09f0adc01039\") " Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.040635 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities" (OuterVolumeSpecName: "utilities") pod "d222068b-8060-4d7f-955e-09f0adc01039" (UID: "d222068b-8060-4d7f-955e-09f0adc01039"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.046142 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp" (OuterVolumeSpecName: "kube-api-access-4brjp") pod "d222068b-8060-4d7f-955e-09f0adc01039" (UID: "d222068b-8060-4d7f-955e-09f0adc01039"). InnerVolumeSpecName "kube-api-access-4brjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.081786 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d222068b-8060-4d7f-955e-09f0adc01039" (UID: "d222068b-8060-4d7f-955e-09f0adc01039"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.140935 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.140980 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brjp\" (UniqueName: \"kubernetes.io/projected/d222068b-8060-4d7f-955e-09f0adc01039-kube-api-access-4brjp\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.140993 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222068b-8060-4d7f-955e-09f0adc01039-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.792689 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.793096 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dfzww" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="registry-server" containerID="cri-o://78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752" gracePeriod=2 Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.860917 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k82mg" event={"ID":"d222068b-8060-4d7f-955e-09f0adc01039","Type":"ContainerDied","Data":"451ae18c8e10acd3583c5711034c6b21fe4658559baf8f55228ffc812d92fe94"} Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.860969 4747 scope.go:117] "RemoveContainer" containerID="6b1729ce3c55d8856593202c178989f7f9603bd9b992af5827cd1f5d2ef45b37" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.861157 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k82mg" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.992639 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.992983 4747 scope.go:117] "RemoveContainer" containerID="979ebb2c8aada5b36191bb5ff35be7e6248973df3b9ab868915f88a5edb0112d" Nov 26 13:31:37 crc kubenswrapper[4747]: I1126 13:31:37.999275 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k82mg"] Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.022286 4747 scope.go:117] "RemoveContainer" containerID="18beff32a0404aaad584e9baac678953056b632d2aeb99ebb80fefb05cd45cb2" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.255252 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.355733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities\") pod \"8142f539-463b-4093-b71a-590349b1ddba\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.355877 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmsn7\" (UniqueName: \"kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7\") pod \"8142f539-463b-4093-b71a-590349b1ddba\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.355933 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content\") pod \"8142f539-463b-4093-b71a-590349b1ddba\" (UID: \"8142f539-463b-4093-b71a-590349b1ddba\") " Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.356577 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities" (OuterVolumeSpecName: "utilities") pod "8142f539-463b-4093-b71a-590349b1ddba" (UID: "8142f539-463b-4093-b71a-590349b1ddba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.362594 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7" (OuterVolumeSpecName: "kube-api-access-xmsn7") pod "8142f539-463b-4093-b71a-590349b1ddba" (UID: "8142f539-463b-4093-b71a-590349b1ddba"). InnerVolumeSpecName "kube-api-access-xmsn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.427906 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8142f539-463b-4093-b71a-590349b1ddba" (UID: "8142f539-463b-4093-b71a-590349b1ddba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.458090 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.458126 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmsn7\" (UniqueName: \"kubernetes.io/projected/8142f539-463b-4093-b71a-590349b1ddba-kube-api-access-xmsn7\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.458142 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8142f539-463b-4093-b71a-590349b1ddba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.870310 4747 generic.go:334] "Generic (PLEG): container finished" podID="8142f539-463b-4093-b71a-590349b1ddba" containerID="78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752" exitCode=0 Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.870371 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfzww" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.870378 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerDied","Data":"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752"} Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.870888 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfzww" event={"ID":"8142f539-463b-4093-b71a-590349b1ddba","Type":"ContainerDied","Data":"923790fdeaf3d86e8414e0542891ea422b2e13af80895d549cea982da3c842b7"} Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.870920 4747 scope.go:117] "RemoveContainer" containerID="78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.892221 4747 scope.go:117] "RemoveContainer" containerID="9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.924477 4747 scope.go:117] "RemoveContainer" containerID="a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.928328 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.950910 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dfzww"] Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.968166 4747 scope.go:117] "RemoveContainer" containerID="78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752" Nov 26 13:31:38 crc kubenswrapper[4747]: E1126 13:31:38.968604 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752\": container with ID starting with 78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752 not found: ID does not exist" containerID="78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.968689 
4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752"} err="failed to get container status \"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752\": rpc error: code = NotFound desc = could not find container \"78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752\": container with ID starting with 78ce931e8efcfbb4772afa4eadba47e7cfd725bb168de1e3b5e9f14be44e5752 not found: ID does not exist" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.968818 4747 scope.go:117] "RemoveContainer" containerID="9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c" Nov 26 13:31:38 crc kubenswrapper[4747]: E1126 13:31:38.969458 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c\": container with ID starting with 9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c not found: ID does not exist" containerID="9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.969491 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c"} err="failed to get container status \"9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c\": rpc error: code = NotFound desc = could not find container \"9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c\": container with ID starting with 9c4cd22941fac8833fa47bbde5016f8bf76c101682409b476af43abde1fb120c not found: ID does not exist" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.969512 4747 scope.go:117] "RemoveContainer" containerID="a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6" Nov 26 13:31:38 crc kubenswrapper[4747]: E1126 13:31:38.969894 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6\": container with ID starting with a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6 not found: ID does not exist" containerID="a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6" Nov 26 13:31:38 crc kubenswrapper[4747]: I1126 13:31:38.969942 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6"} err="failed to get container status \"a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6\": rpc error: code = NotFound desc = could not find container \"a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6\": container with ID starting with a82048764a0e55eba18527ffc510e68efc116a7b4041b83d1dac9060b0c1afa6 not found: ID does not exist" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.124384 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.124466 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.151976 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.805512 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8142f539-463b-4093-b71a-590349b1ddba" path="/var/lib/kubelet/pods/8142f539-463b-4093-b71a-590349b1ddba/volumes" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.806275 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d222068b-8060-4d7f-955e-09f0adc01039" path="/var/lib/kubelet/pods/d222068b-8060-4d7f-955e-09f0adc01039/volumes" Nov 26 13:31:39 crc kubenswrapper[4747]: I1126 13:31:39.914812 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-gnpfp" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420379 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr"] Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420867 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="registry-server" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420883 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="registry-server" Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420893 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="registry-server" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420900 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="registry-server" Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420910 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="extract-content" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420917 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="extract-content" Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420926 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="extract-utilities" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420931 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="extract-utilities" Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420940 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="extract-utilities" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420946 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="extract-utilities" Nov 26 13:31:41 crc kubenswrapper[4747]: E1126 13:31:41.420965 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="extract-content" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.420970 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="extract-content" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.421106 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8142f539-463b-4093-b71a-590349b1ddba" containerName="registry-server" Nov 26 
13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.421122 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d222068b-8060-4d7f-955e-09f0adc01039" containerName="registry-server" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.421802 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.423631 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.429589 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr"] Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.494175 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.494296 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86wh\" (UniqueName: \"kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.494385 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.600014 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86wh\" (UniqueName: \"kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.600123 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.600190 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " 
pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.600798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.600833 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.624398 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86wh\" (UniqueName: \"kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:41 crc kubenswrapper[4747]: I1126 13:31:41.739301 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:42 crc kubenswrapper[4747]: I1126 13:31:42.201204 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr"] Nov 26 13:31:42 crc kubenswrapper[4747]: I1126 13:31:42.906447 4747 generic.go:334] "Generic (PLEG): container finished" podID="751ef444-9117-4506-bf1e-9d7605d07991" containerID="f61423b7ada5a3d9d3635aaf7892cf5b19ab359405cb4a476480d5ad44fa1340" exitCode=0 Nov 26 13:31:42 crc kubenswrapper[4747]: I1126 13:31:42.906548 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerDied","Data":"f61423b7ada5a3d9d3635aaf7892cf5b19ab359405cb4a476480d5ad44fa1340"} Nov 26 13:31:42 crc kubenswrapper[4747]: I1126 13:31:42.906815 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerStarted","Data":"56e4caf3c1412028773058d2fdabbcac600b66b9b323dd5681e12933d577f5b6"} Nov 26 13:31:43 crc kubenswrapper[4747]: I1126 13:31:43.918256 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerStarted","Data":"c93d4bec032427f6ddf698491d8884c17a1294e766fabc5e079bcd888d9c0fd5"} Nov 26 13:31:44 crc kubenswrapper[4747]: I1126 13:31:44.933752 4747 generic.go:334] "Generic (PLEG): container finished" podID="751ef444-9117-4506-bf1e-9d7605d07991" containerID="c93d4bec032427f6ddf698491d8884c17a1294e766fabc5e079bcd888d9c0fd5" exitCode=0 Nov 26 13:31:44 crc kubenswrapper[4747]: I1126 13:31:44.933816 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerDied","Data":"c93d4bec032427f6ddf698491d8884c17a1294e766fabc5e079bcd888d9c0fd5"} Nov 26 13:31:45 crc kubenswrapper[4747]: I1126 13:31:45.939608 4747 generic.go:334] "Generic (PLEG): container finished" podID="751ef444-9117-4506-bf1e-9d7605d07991" containerID="284559728aef8bc3c03aeb653e0b4861a715fc09534c717a958b7b331d238124" exitCode=0 Nov 26 13:31:45 crc kubenswrapper[4747]: I1126 13:31:45.939715 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerDied","Data":"284559728aef8bc3c03aeb653e0b4861a715fc09534c717a958b7b331d238124"} Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.225642 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.368999 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util\") pod \"751ef444-9117-4506-bf1e-9d7605d07991\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.369430 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle\") pod \"751ef444-9117-4506-bf1e-9d7605d07991\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.369481 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g86wh\" (UniqueName: \"kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh\") pod \"751ef444-9117-4506-bf1e-9d7605d07991\" (UID: \"751ef444-9117-4506-bf1e-9d7605d07991\") " Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.370232 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle" (OuterVolumeSpecName: "bundle") pod "751ef444-9117-4506-bf1e-9d7605d07991" (UID: "751ef444-9117-4506-bf1e-9d7605d07991"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.375629 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh" (OuterVolumeSpecName: "kube-api-access-g86wh") pod "751ef444-9117-4506-bf1e-9d7605d07991" (UID: "751ef444-9117-4506-bf1e-9d7605d07991"). InnerVolumeSpecName "kube-api-access-g86wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.385598 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util" (OuterVolumeSpecName: "util") pod "751ef444-9117-4506-bf1e-9d7605d07991" (UID: "751ef444-9117-4506-bf1e-9d7605d07991"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.470915 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g86wh\" (UniqueName: \"kubernetes.io/projected/751ef444-9117-4506-bf1e-9d7605d07991-kube-api-access-g86wh\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.470958 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.470969 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/751ef444-9117-4506-bf1e-9d7605d07991-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.967423 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" event={"ID":"751ef444-9117-4506-bf1e-9d7605d07991","Type":"ContainerDied","Data":"56e4caf3c1412028773058d2fdabbcac600b66b9b323dd5681e12933d577f5b6"} Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.967500 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56e4caf3c1412028773058d2fdabbcac600b66b9b323dd5681e12933d577f5b6" Nov 26 13:31:47 crc kubenswrapper[4747]: I1126 13:31:47.967611 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.385414 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9"] Nov 26 13:31:51 crc kubenswrapper[4747]: E1126 13:31:51.385827 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="pull" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.385838 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="pull" Nov 26 13:31:51 crc kubenswrapper[4747]: E1126 13:31:51.385850 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="util" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.385855 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="util" Nov 26 13:31:51 crc kubenswrapper[4747]: E1126 13:31:51.385868 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="extract" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.385874 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="extract" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.385982 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="751ef444-9117-4506-bf1e-9d7605d07991" containerName="extract" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.386387 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.388437 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.390305 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9bv2g" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.391033 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.422232 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-webhook-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.422340 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-apiservice-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.422370 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzg9\" (UniqueName: \"kubernetes.io/projected/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-kube-api-access-lqzg9\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.444740 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9"] Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.525020 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-webhook-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.525148 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-apiservice-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.525174 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzg9\" (UniqueName: \"kubernetes.io/projected/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-kube-api-access-lqzg9\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " 
pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.531439 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-apiservice-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.534813 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-webhook-cert\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.550534 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzg9\" (UniqueName: \"kubernetes.io/projected/399f9a79-e1f0-4fcd-8378-f0dee4361f7c-kube-api-access-lqzg9\") pod \"mariadb-operator-controller-manager-d6b588f4-b64k9\" (UID: \"399f9a79-e1f0-4fcd-8378-f0dee4361f7c\") " pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.704494 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.929478 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9"] Nov 26 13:31:51 crc kubenswrapper[4747]: I1126 13:31:51.994809 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" event={"ID":"399f9a79-e1f0-4fcd-8378-f0dee4361f7c","Type":"ContainerStarted","Data":"cc9470f7ddc2d1f6ca96c7b28bc8cc4e0a2f2483ab026335b96139313f885f17"} Nov 26 13:31:57 crc kubenswrapper[4747]: I1126 13:31:57.028723 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" event={"ID":"399f9a79-e1f0-4fcd-8378-f0dee4361f7c","Type":"ContainerStarted","Data":"ee197bd6158dfdb0032dbce5c490498ee6fcc7357ed639720b6aa240d22c8e9c"} Nov 26 13:31:57 crc kubenswrapper[4747]: I1126 13:31:57.030520 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:31:57 crc kubenswrapper[4747]: I1126 13:31:57.054354 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" podStartSLOduration=2.130361594 podStartE2EDuration="6.054335056s" podCreationTimestamp="2025-11-26 13:31:51 +0000 UTC" firstStartedPulling="2025-11-26 13:31:51.949194986 +0000 UTC m=+998.935506001" lastFinishedPulling="2025-11-26 13:31:55.873168448 +0000 UTC m=+1002.859479463" observedRunningTime="2025-11-26 13:31:57.048270775 +0000 UTC m=+1004.034581810" watchObservedRunningTime="2025-11-26 13:31:57.054335056 +0000 UTC m=+1004.040646081" Nov 26 13:32:01 crc kubenswrapper[4747]: I1126 13:32:01.709351 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-d6b588f4-b64k9" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.495818 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-ql97f"] Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.497497 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.499816 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-s7kh4" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.503551 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-ql97f"] Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.648799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2rq\" (UniqueName: \"kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq\") pod \"infra-operator-index-ql97f\" (UID: \"86d43f71-ece8-4aaf-a045-d78042075eab\") " pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.750387 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2rq\" (UniqueName: \"kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq\") pod \"infra-operator-index-ql97f\" (UID: \"86d43f71-ece8-4aaf-a045-d78042075eab\") " pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.778561 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2rq\" (UniqueName: \"kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq\") pod \"infra-operator-index-ql97f\" (UID: \"86d43f71-ece8-4aaf-a045-d78042075eab\") " pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:06 crc kubenswrapper[4747]: I1126 13:32:06.815249 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:07 crc kubenswrapper[4747]: W1126 13:32:07.274662 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d43f71_ece8_4aaf_a045_d78042075eab.slice/crio-b0563b12a780525b5cdee9c8fb21cb37d574614f549359e00c579697ea44950d WatchSource:0}: Error finding container b0563b12a780525b5cdee9c8fb21cb37d574614f549359e00c579697ea44950d: Status 404 returned error can't find the container with id b0563b12a780525b5cdee9c8fb21cb37d574614f549359e00c579697ea44950d Nov 26 13:32:07 crc kubenswrapper[4747]: I1126 13:32:07.275598 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-ql97f"] Nov 26 13:32:08 crc kubenswrapper[4747]: I1126 13:32:08.091872 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-ql97f" event={"ID":"86d43f71-ece8-4aaf-a045-d78042075eab","Type":"ContainerStarted","Data":"b0563b12a780525b5cdee9c8fb21cb37d574614f549359e00c579697ea44950d"} Nov 26 13:32:09 crc kubenswrapper[4747]: I1126 13:32:09.099498 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-ql97f" event={"ID":"86d43f71-ece8-4aaf-a045-d78042075eab","Type":"ContainerStarted","Data":"5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea"} Nov 26 13:32:09 crc kubenswrapper[4747]: I1126 13:32:09.115949 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-ql97f" podStartSLOduration=2.051710275 podStartE2EDuration="3.115930507s" podCreationTimestamp="2025-11-26 13:32:06 +0000 UTC" firstStartedPulling="2025-11-26 13:32:07.276611334 +0000 UTC m=+1014.262922359" lastFinishedPulling="2025-11-26 13:32:08.340831566 +0000 UTC m=+1015.327142591" observedRunningTime="2025-11-26 13:32:09.114004049 +0000 UTC m=+1016.100315064" watchObservedRunningTime="2025-11-26 13:32:09.115930507 +0000 UTC m=+1016.102241522" Nov 26 13:32:10 crc kubenswrapper[4747]: I1126 13:32:10.493126 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-ql97f"] Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.088333 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-s2drq"] Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.089369 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-s2drq" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.095954 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-s2drq"] Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.114088 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-ql97f" podUID="86d43f71-ece8-4aaf-a045-d78042075eab" containerName="registry-server" containerID="cri-o://5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea" gracePeriod=2 Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.211826 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7nm\" (UniqueName: \"kubernetes.io/projected/ee9a1424-ac63-4784-bed5-19b03117eb93-kube-api-access-4c7nm\") pod \"infra-operator-index-s2drq\" (UID: \"ee9a1424-ac63-4784-bed5-19b03117eb93\") " pod="openstack-operators/infra-operator-index-s2drq" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.312875 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7nm\" (UniqueName: \"kubernetes.io/projected/ee9a1424-ac63-4784-bed5-19b03117eb93-kube-api-access-4c7nm\") pod \"infra-operator-index-s2drq\" (UID: \"ee9a1424-ac63-4784-bed5-19b03117eb93\") " pod="openstack-operators/infra-operator-index-s2drq" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.342796 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7nm\" (UniqueName: \"kubernetes.io/projected/ee9a1424-ac63-4784-bed5-19b03117eb93-kube-api-access-4c7nm\") pod \"infra-operator-index-s2drq\" (UID: \"ee9a1424-ac63-4784-bed5-19b03117eb93\") " pod="openstack-operators/infra-operator-index-s2drq" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.404218 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-s2drq" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.466754 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.617067 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2rq\" (UniqueName: \"kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq\") pod \"86d43f71-ece8-4aaf-a045-d78042075eab\" (UID: \"86d43f71-ece8-4aaf-a045-d78042075eab\") " Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.621266 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq" (OuterVolumeSpecName: "kube-api-access-hq2rq") pod "86d43f71-ece8-4aaf-a045-d78042075eab" (UID: "86d43f71-ece8-4aaf-a045-d78042075eab"). InnerVolumeSpecName "kube-api-access-hq2rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.718706 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2rq\" (UniqueName: \"kubernetes.io/projected/86d43f71-ece8-4aaf-a045-d78042075eab-kube-api-access-hq2rq\") on node \"crc\" DevicePath \"\"" Nov 26 13:32:11 crc kubenswrapper[4747]: I1126 13:32:11.826427 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-s2drq"] Nov 26 13:32:11 crc kubenswrapper[4747]: W1126 13:32:11.838462 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9a1424_ac63_4784_bed5_19b03117eb93.slice/crio-4359717a54f27061a6b3a70fcb860ae4ce2b238c1566c2f34daf625d66e29ee9 WatchSource:0}: Error finding container 4359717a54f27061a6b3a70fcb860ae4ce2b238c1566c2f34daf625d66e29ee9: Status 404 returned error can't find the container with id 4359717a54f27061a6b3a70fcb860ae4ce2b238c1566c2f34daf625d66e29ee9 Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.125195 4747 generic.go:334] "Generic (PLEG): container finished" podID="86d43f71-ece8-4aaf-a045-d78042075eab" containerID="5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea" exitCode=0 Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.125257 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-ql97f" Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.125264 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-ql97f" event={"ID":"86d43f71-ece8-4aaf-a045-d78042075eab","Type":"ContainerDied","Data":"5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea"} Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.125351 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-ql97f" event={"ID":"86d43f71-ece8-4aaf-a045-d78042075eab","Type":"ContainerDied","Data":"b0563b12a780525b5cdee9c8fb21cb37d574614f549359e00c579697ea44950d"} Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.125382 4747 scope.go:117] "RemoveContainer" containerID="5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea" Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.127118 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-s2drq" event={"ID":"ee9a1424-ac63-4784-bed5-19b03117eb93","Type":"ContainerStarted","Data":"4359717a54f27061a6b3a70fcb860ae4ce2b238c1566c2f34daf625d66e29ee9"} Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.154886 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-ql97f"] Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.158280 4747 scope.go:117] "RemoveContainer" containerID="5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea" Nov 26 13:32:12 crc kubenswrapper[4747]: E1126 13:32:12.158732 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea\": container with ID starting with 5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea not found: ID does not exist" containerID="5fad485b5c874974c23f45f329fa5d723ee3328b5845a3a5a7dab33d0555a9ea" Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.158769 4747 
Nov 26 13:32:12 crc kubenswrapper[4747]: I1126 13:32:12.161339 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-ql97f"]
Nov 26 13:32:13 crc kubenswrapper[4747]: I1126 13:32:13.137048 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-s2drq" event={"ID":"ee9a1424-ac63-4784-bed5-19b03117eb93","Type":"ContainerStarted","Data":"01f6f1c121be2a9e818062b96e2548b9166fc759b56507ac7a4a0aa360945f63"}
Nov 26 13:32:13 crc kubenswrapper[4747]: I1126 13:32:13.156783 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-s2drq" podStartSLOduration=1.6008320660000002 podStartE2EDuration="2.156762751s" podCreationTimestamp="2025-11-26 13:32:11 +0000 UTC" firstStartedPulling="2025-11-26 13:32:11.841841659 +0000 UTC m=+1018.828152674" lastFinishedPulling="2025-11-26 13:32:12.397772314 +0000 UTC m=+1019.384083359" observedRunningTime="2025-11-26 13:32:13.152109405 +0000 UTC m=+1020.138420420" watchObservedRunningTime="2025-11-26 13:32:13.156762751 +0000 UTC m=+1020.143073766"
Nov 26 13:32:13 crc kubenswrapper[4747]: I1126 13:32:13.814117 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d43f71-ece8-4aaf-a045-d78042075eab" path="/var/lib/kubelet/pods/86d43f71-ece8-4aaf-a045-d78042075eab/volumes"
Nov 26 13:32:21 crc kubenswrapper[4747]: I1126 13:32:21.404906 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-s2drq"
Nov 26 13:32:21 crc kubenswrapper[4747]: I1126 13:32:21.407441 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-s2drq"
Nov 26 13:32:21 crc kubenswrapper[4747]: I1126 13:32:21.448032 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-s2drq"
Nov 26 13:32:22 crc kubenswrapper[4747]: I1126 13:32:22.227521 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-s2drq"
Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.138543 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf"]
Nov 26 13:32:24 crc kubenswrapper[4747]: E1126 13:32:24.140225 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d43f71-ece8-4aaf-a045-d78042075eab" containerName="registry-server"
Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.140393 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d43f71-ece8-4aaf-a045-d78042075eab" containerName="registry-server"
Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.140733 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d43f71-ece8-4aaf-a045-d78042075eab" containerName="registry-server"
Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.142295 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf"
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.146000 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.154600 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf"] Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.243628 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsl6\" (UniqueName: \"kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.243898 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.244101 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.345398 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.345534 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsl6\" (UniqueName: \"kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.345685 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.346330 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.346329 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.371258 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxsl6\" (UniqueName: \"kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.475017 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:24 crc kubenswrapper[4747]: I1126 13:32:24.710623 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf"] Nov 26 13:32:25 crc kubenswrapper[4747]: I1126 13:32:25.252911 4747 generic.go:334] "Generic (PLEG): container finished" podID="da964da5-781d-46be-a7f8-3f0151d77c22" containerID="b981b3b86051973526fc3e1427be006e11d8f09d523fdc67257933d5840e2f1d" exitCode=0 Nov 26 13:32:25 crc kubenswrapper[4747]: I1126 13:32:25.252988 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" event={"ID":"da964da5-781d-46be-a7f8-3f0151d77c22","Type":"ContainerDied","Data":"b981b3b86051973526fc3e1427be006e11d8f09d523fdc67257933d5840e2f1d"} Nov 26 13:32:25 crc kubenswrapper[4747]: I1126 13:32:25.253042 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" event={"ID":"da964da5-781d-46be-a7f8-3f0151d77c22","Type":"ContainerStarted","Data":"2b47e9fa55e0b20b12a4adb2cf95b6a3698b53df7ca956f53755104da4379911"} Nov 26 13:32:27 crc kubenswrapper[4747]: I1126 13:32:27.269343 4747 generic.go:334] "Generic (PLEG): container finished" podID="da964da5-781d-46be-a7f8-3f0151d77c22" containerID="ade4fe9d754aa729645b8e933e5305206c69d44a7d72d49aab61add0e74a25b3" exitCode=0 Nov 26 13:32:27 crc kubenswrapper[4747]: I1126 13:32:27.269421 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" event={"ID":"da964da5-781d-46be-a7f8-3f0151d77c22","Type":"ContainerDied","Data":"ade4fe9d754aa729645b8e933e5305206c69d44a7d72d49aab61add0e74a25b3"} Nov 26 13:32:27 crc kubenswrapper[4747]: E1126 13:32:27.579012 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda964da5_781d_46be_a7f8_3f0151d77c22.slice/crio-403fb35d80d5d4fc19415ae8e1cb0287e22f3e807ae40594537ad59ea35ff43b.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:32:28 crc kubenswrapper[4747]: I1126 13:32:28.276572 4747 generic.go:334] "Generic (PLEG): container finished" podID="da964da5-781d-46be-a7f8-3f0151d77c22" containerID="403fb35d80d5d4fc19415ae8e1cb0287e22f3e807ae40594537ad59ea35ff43b" exitCode=0 Nov 26 13:32:28 crc kubenswrapper[4747]: I1126 13:32:28.276635 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" event={"ID":"da964da5-781d-46be-a7f8-3f0151d77c22","Type":"ContainerDied","Data":"403fb35d80d5d4fc19415ae8e1cb0287e22f3e807ae40594537ad59ea35ff43b"} Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.586145 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.720460 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle\") pod \"da964da5-781d-46be-a7f8-3f0151d77c22\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.720562 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util\") pod \"da964da5-781d-46be-a7f8-3f0151d77c22\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.720745 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxsl6\" (UniqueName: \"kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6\") pod \"da964da5-781d-46be-a7f8-3f0151d77c22\" (UID: \"da964da5-781d-46be-a7f8-3f0151d77c22\") " Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.723167 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle" (OuterVolumeSpecName: "bundle") pod "da964da5-781d-46be-a7f8-3f0151d77c22" (UID: "da964da5-781d-46be-a7f8-3f0151d77c22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.732625 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6" (OuterVolumeSpecName: "kube-api-access-pxsl6") pod "da964da5-781d-46be-a7f8-3f0151d77c22" (UID: "da964da5-781d-46be-a7f8-3f0151d77c22"). InnerVolumeSpecName "kube-api-access-pxsl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.742602 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util" (OuterVolumeSpecName: "util") pod "da964da5-781d-46be-a7f8-3f0151d77c22" (UID: "da964da5-781d-46be-a7f8-3f0151d77c22"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.824671 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.824740 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da964da5-781d-46be-a7f8-3f0151d77c22-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:32:29 crc kubenswrapper[4747]: I1126 13:32:29.824762 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxsl6\" (UniqueName: \"kubernetes.io/projected/da964da5-781d-46be-a7f8-3f0151d77c22-kube-api-access-pxsl6\") on node \"crc\" DevicePath \"\"" Nov 26 13:32:30 crc kubenswrapper[4747]: I1126 13:32:30.295181 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" event={"ID":"da964da5-781d-46be-a7f8-3f0151d77c22","Type":"ContainerDied","Data":"2b47e9fa55e0b20b12a4adb2cf95b6a3698b53df7ca956f53755104da4379911"} Nov 26 13:32:30 crc kubenswrapper[4747]: I1126 13:32:30.295249 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b47e9fa55e0b20b12a4adb2cf95b6a3698b53df7ca956f53755104da4379911" Nov 26 13:32:30 crc kubenswrapper[4747]: I1126 13:32:30.295481 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.340854 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v"] Nov 26 13:32:35 crc kubenswrapper[4747]: E1126 13:32:35.341682 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="util" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.341699 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="util" Nov 26 13:32:35 crc kubenswrapper[4747]: E1126 13:32:35.341725 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="extract" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.341732 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="extract" Nov 26 13:32:35 crc kubenswrapper[4747]: E1126 13:32:35.341745 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="pull" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.341753 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="pull" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.341868 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="da964da5-781d-46be-a7f8-3f0151d77c22" containerName="extract" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.342630 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.344939 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.345080 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l8mx5" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.360433 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v"] Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.416004 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-webhook-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.416099 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4s2l\" (UniqueName: \"kubernetes.io/projected/8a1ede6d-b2cb-4d1f-857a-4307ba531806-kube-api-access-k4s2l\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.416182 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-apiservice-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.517776 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-apiservice-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.517861 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-webhook-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.517914 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4s2l\" (UniqueName: \"kubernetes.io/projected/8a1ede6d-b2cb-4d1f-857a-4307ba531806-kube-api-access-k4s2l\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.525081 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-apiservice-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.526980 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a1ede6d-b2cb-4d1f-857a-4307ba531806-webhook-cert\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.541716 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4s2l\" (UniqueName: \"kubernetes.io/projected/8a1ede6d-b2cb-4d1f-857a-4307ba531806-kube-api-access-k4s2l\") pod \"infra-operator-controller-manager-b4655cf54-jl25v\" (UID: \"8a1ede6d-b2cb-4d1f-857a-4307ba531806\") " pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:35 crc kubenswrapper[4747]: I1126 13:32:35.659327 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:36 crc kubenswrapper[4747]: I1126 13:32:36.100047 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v"] Nov 26 13:32:36 crc kubenswrapper[4747]: W1126 13:32:36.105624 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a1ede6d_b2cb_4d1f_857a_4307ba531806.slice/crio-5da93119866c7e524fdc81a2e338fea573742e8fc95de5a4a8efd45bc92f196a WatchSource:0}: Error finding container 5da93119866c7e524fdc81a2e338fea573742e8fc95de5a4a8efd45bc92f196a: Status 404 returned error can't find the container with id 5da93119866c7e524fdc81a2e338fea573742e8fc95de5a4a8efd45bc92f196a Nov 26 13:32:36 crc kubenswrapper[4747]: I1126 13:32:36.356257 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" event={"ID":"8a1ede6d-b2cb-4d1f-857a-4307ba531806","Type":"ContainerStarted","Data":"5da93119866c7e524fdc81a2e338fea573742e8fc95de5a4a8efd45bc92f196a"} Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.899542 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.901523 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.904640 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.904845 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.906599 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-dpnk8" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.906738 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.910467 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.910538 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.915253 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.916371 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.924248 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.926984 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.927396 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.940124 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.947911 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-default\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.947986 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.948018 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82lbr\" (UniqueName: \"kubernetes.io/projected/09efed01-5e87-440c-aafe-bc16617e8bfd-kube-api-access-82lbr\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.948049 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.948127 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:37 crc kubenswrapper[4747]: I1126 13:32:37.948147 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-kolla-config\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050618 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-generated\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050662 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-operator-scripts\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050691 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-default\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050710 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050726 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82lbr\" (UniqueName: \"kubernetes.io/projected/09efed01-5e87-440c-aafe-bc16617e8bfd-kube-api-access-82lbr\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050745 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-kolla-config\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.050830 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qn9\" 
(UniqueName: \"kubernetes.io/projected/298454c7-93bf-41be-877e-9f3e27f47119-kube-api-access-r4qn9\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052214 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-default\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052293 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/298454c7-93bf-41be-877e-9f3e27f47119-config-data-generated\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052344 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052404 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-default\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052481 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052501 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-kolla-config\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052523 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-operator-scripts\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052547 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-config-data-default\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052646 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwv8\" (UniqueName: 
\"kubernetes.io/projected/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-kube-api-access-7mwv8\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052680 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-kolla-config\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052705 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.052760 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.055211 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09efed01-5e87-440c-aafe-bc16617e8bfd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.055691 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.058926 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.062742 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09efed01-5e87-440c-aafe-bc16617e8bfd-kolla-config\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.072328 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82lbr\" (UniqueName: \"kubernetes.io/projected/09efed01-5e87-440c-aafe-bc16617e8bfd-kube-api-access-82lbr\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.081433 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"openstack-galera-0\" (UID: \"09efed01-5e87-440c-aafe-bc16617e8bfd\") " 
pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154159 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154202 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-generated\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154220 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-operator-scripts\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154257 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qn9\" (UniqueName: \"kubernetes.io/projected/298454c7-93bf-41be-877e-9f3e27f47119-kube-api-access-r4qn9\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154275 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-kolla-config\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154294 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/298454c7-93bf-41be-877e-9f3e27f47119-config-data-generated\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154326 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-default\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154355 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-operator-scripts\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154370 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-config-data-default\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154404 
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154428 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-kolla-config\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154444 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154586 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-generated\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154712 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/openstack-galera-1"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.154813 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/openstack-galera-2"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.155701 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-config-data-default\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.156142 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-config-data-default\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.156527 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/298454c7-93bf-41be-877e-9f3e27f47119-config-data-generated\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2"
Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.156659 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-operator-scripts\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1"
\"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-operator-scripts\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.156900 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-kolla-config\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.156900 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-kolla-config\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.157574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298454c7-93bf-41be-877e-9f3e27f47119-operator-scripts\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.173200 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.173259 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qn9\" (UniqueName: \"kubernetes.io/projected/298454c7-93bf-41be-877e-9f3e27f47119-kube-api-access-r4qn9\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.173492 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"298454c7-93bf-41be-877e-9f3e27f47119\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.180726 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwv8\" (UniqueName: \"kubernetes.io/projected/c052d6ba-7cf0-4d98-8c38-be1d7afccafa-kube-api-access-7mwv8\") pod \"openstack-galera-1\" (UID: \"c052d6ba-7cf0-4d98-8c38-be1d7afccafa\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.224953 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.240272 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.264470 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.718295 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 26 13:32:38 crc kubenswrapper[4747]: W1126 13:32:38.721677 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod298454c7_93bf_41be_877e_9f3e27f47119.slice/crio-a08283482cf6602dd8a524c920db84b4c62f8a282914b52d6e6a964629f08262 WatchSource:0}: Error finding container a08283482cf6602dd8a524c920db84b4c62f8a282914b52d6e6a964629f08262: Status 404 returned error can't find the container with id a08283482cf6602dd8a524c920db84b4c62f8a282914b52d6e6a964629f08262 Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.728818 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Nov 26 13:32:38 crc kubenswrapper[4747]: W1126 13:32:38.729921 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09efed01_5e87_440c_aafe_bc16617e8bfd.slice/crio-bc3681c635157e3699ba8c9f8212dc140db319ada26cb1bd828560b6ef18ea9b WatchSource:0}: Error finding container bc3681c635157e3699ba8c9f8212dc140db319ada26cb1bd828560b6ef18ea9b: Status 404 returned error can't find the container with id bc3681c635157e3699ba8c9f8212dc140db319ada26cb1bd828560b6ef18ea9b Nov 26 13:32:38 crc kubenswrapper[4747]: I1126 13:32:38.753339 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 26 13:32:39 crc kubenswrapper[4747]: I1126 13:32:39.375334 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"298454c7-93bf-41be-877e-9f3e27f47119","Type":"ContainerStarted","Data":"a08283482cf6602dd8a524c920db84b4c62f8a282914b52d6e6a964629f08262"} Nov 26 13:32:39 crc kubenswrapper[4747]: I1126 13:32:39.377131 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"c052d6ba-7cf0-4d98-8c38-be1d7afccafa","Type":"ContainerStarted","Data":"d161b5907d39eca97803f3650124e1b43cfafa199311b4a42061fb01d89bb962"} Nov 26 13:32:39 crc kubenswrapper[4747]: I1126 13:32:39.378035 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"09efed01-5e87-440c-aafe-bc16617e8bfd","Type":"ContainerStarted","Data":"bc3681c635157e3699ba8c9f8212dc140db319ada26cb1bd828560b6ef18ea9b"} Nov 26 13:32:42 crc kubenswrapper[4747]: I1126 13:32:42.399649 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" event={"ID":"8a1ede6d-b2cb-4d1f-857a-4307ba531806","Type":"ContainerStarted","Data":"88ffa8189b9b620fd05aca83136e52b7d0ca3ebf3854f9fb423826171b0cf389"} Nov 26 13:32:50 crc kubenswrapper[4747]: I1126 13:32:50.462864 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"09efed01-5e87-440c-aafe-bc16617e8bfd","Type":"ContainerStarted","Data":"64d672c23b7e3190bf5f8525f568baaf4651d24fba6f0fc558173f44fa1a6176"} Nov 26 13:32:51 crc kubenswrapper[4747]: I1126 13:32:51.469885 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" 
event={"ID":"c052d6ba-7cf0-4d98-8c38-be1d7afccafa","Type":"ContainerStarted","Data":"3110bbf6ad64c1ccb42552207a287c3487f2cea4f3c7f832933cb61c1db2f695"} Nov 26 13:32:54 crc kubenswrapper[4747]: I1126 13:32:54.507840 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"298454c7-93bf-41be-877e-9f3e27f47119","Type":"ContainerStarted","Data":"3a9c35e7ee98dc40ed6a06dd97efdef464089e40458e4da610692133d906c9da"} Nov 26 13:32:55 crc kubenswrapper[4747]: I1126 13:32:55.518788 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" event={"ID":"8a1ede6d-b2cb-4d1f-857a-4307ba531806","Type":"ContainerStarted","Data":"c5dff19e2531c5e373c4d6b8f8e2c851e6b516f26356d5eae05f6f4f838cc447"} Nov 26 13:32:55 crc kubenswrapper[4747]: I1126 13:32:55.519270 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:55 crc kubenswrapper[4747]: I1126 13:32:55.526026 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" Nov 26 13:32:55 crc kubenswrapper[4747]: I1126 13:32:55.545640 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b4655cf54-jl25v" podStartSLOduration=1.665271949 podStartE2EDuration="20.545609855s" podCreationTimestamp="2025-11-26 13:32:35 +0000 UTC" firstStartedPulling="2025-11-26 13:32:36.108275178 +0000 UTC m=+1043.094586213" lastFinishedPulling="2025-11-26 13:32:54.988613104 +0000 UTC m=+1061.974924119" observedRunningTime="2025-11-26 13:32:55.541953664 +0000 UTC m=+1062.528264689" watchObservedRunningTime="2025-11-26 13:32:55.545609855 +0000 UTC m=+1062.531920900" Nov 26 13:32:56 crc kubenswrapper[4747]: I1126 13:32:56.533416 4747 generic.go:334] "Generic (PLEG): container finished" podID="c052d6ba-7cf0-4d98-8c38-be1d7afccafa" containerID="3110bbf6ad64c1ccb42552207a287c3487f2cea4f3c7f832933cb61c1db2f695" exitCode=0 Nov 26 13:32:56 crc kubenswrapper[4747]: I1126 13:32:56.533490 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"c052d6ba-7cf0-4d98-8c38-be1d7afccafa","Type":"ContainerDied","Data":"3110bbf6ad64c1ccb42552207a287c3487f2cea4f3c7f832933cb61c1db2f695"} Nov 26 13:32:56 crc kubenswrapper[4747]: I1126 13:32:56.535972 4747 generic.go:334] "Generic (PLEG): container finished" podID="09efed01-5e87-440c-aafe-bc16617e8bfd" containerID="64d672c23b7e3190bf5f8525f568baaf4651d24fba6f0fc558173f44fa1a6176" exitCode=0 Nov 26 13:32:56 crc kubenswrapper[4747]: I1126 13:32:56.536091 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"09efed01-5e87-440c-aafe-bc16617e8bfd","Type":"ContainerDied","Data":"64d672c23b7e3190bf5f8525f568baaf4651d24fba6f0fc558173f44fa1a6176"} Nov 26 13:32:57 crc kubenswrapper[4747]: I1126 13:32:57.548149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"09efed01-5e87-440c-aafe-bc16617e8bfd","Type":"ContainerStarted","Data":"6f69d6e934c4fba7fe7dfddb2b93e6a787279720872af889d197280b54be1b27"} Nov 26 13:32:57 crc kubenswrapper[4747]: I1126 13:32:57.550385 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" 
event={"ID":"c052d6ba-7cf0-4d98-8c38-be1d7afccafa","Type":"ContainerStarted","Data":"df3dbfd294906d4ea75719abcad422b3c37db8ad4c12aafb776eea6010fa695d"} Nov 26 13:32:57 crc kubenswrapper[4747]: I1126 13:32:57.575770 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=10.077806163 podStartE2EDuration="21.575750026s" podCreationTimestamp="2025-11-26 13:32:36 +0000 UTC" firstStartedPulling="2025-11-26 13:32:38.73354474 +0000 UTC m=+1045.719855765" lastFinishedPulling="2025-11-26 13:32:50.231488593 +0000 UTC m=+1057.217799628" observedRunningTime="2025-11-26 13:32:57.573529211 +0000 UTC m=+1064.559840236" watchObservedRunningTime="2025-11-26 13:32:57.575750026 +0000 UTC m=+1064.562061061" Nov 26 13:32:57 crc kubenswrapper[4747]: I1126 13:32:57.594486 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=9.948381635 podStartE2EDuration="21.594464343s" podCreationTimestamp="2025-11-26 13:32:36 +0000 UTC" firstStartedPulling="2025-11-26 13:32:38.743122889 +0000 UTC m=+1045.729433924" lastFinishedPulling="2025-11-26 13:32:50.389205607 +0000 UTC m=+1057.375516632" observedRunningTime="2025-11-26 13:32:57.591806977 +0000 UTC m=+1064.578118012" watchObservedRunningTime="2025-11-26 13:32:57.594464343 +0000 UTC m=+1064.580775368" Nov 26 13:32:58 crc kubenswrapper[4747]: E1126 13:32:58.006001 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod298454c7_93bf_41be_877e_9f3e27f47119.slice/crio-3a9c35e7ee98dc40ed6a06dd97efdef464089e40458e4da610692133d906c9da.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.225905 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.226336 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.240707 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.240752 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.557854 4747 generic.go:334] "Generic (PLEG): container finished" podID="298454c7-93bf-41be-877e-9f3e27f47119" containerID="3a9c35e7ee98dc40ed6a06dd97efdef464089e40458e4da610692133d906c9da" exitCode=0 Nov 26 13:32:58 crc kubenswrapper[4747]: I1126 13:32:58.558503 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"298454c7-93bf-41be-877e-9f3e27f47119","Type":"ContainerDied","Data":"3a9c35e7ee98dc40ed6a06dd97efdef464089e40458e4da610692133d906c9da"} Nov 26 13:32:59 crc kubenswrapper[4747]: I1126 13:32:59.564505 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"298454c7-93bf-41be-877e-9f3e27f47119","Type":"ContainerStarted","Data":"0c725d3f49ebdf5e2786c75dbb7c11ef68740ff9c18d22b243df2c205807cbba"} Nov 26 13:32:59 crc kubenswrapper[4747]: I1126 13:32:59.583198 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=10.74693578 podStartE2EDuration="23.58317873s" podCreationTimestamp="2025-11-26 13:32:36 +0000 UTC" firstStartedPulling="2025-11-26 13:32:38.725728875 +0000 UTC m=+1045.712039890" lastFinishedPulling="2025-11-26 13:32:51.561971815 +0000 UTC m=+1058.548282840" observedRunningTime="2025-11-26 13:32:59.580500543 +0000 UTC m=+1066.566811558" watchObservedRunningTime="2025-11-26 13:32:59.58317873 +0000 UTC m=+1066.569489745" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.067838 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.072929 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.076692 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-48vzq" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.077266 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.118338 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.180529 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-kolla-config\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.180594 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-config-data\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.180818 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9nrc\" (UniqueName: \"kubernetes.io/projected/fa5963aa-3702-4432-a430-e6716768ed8c-kube-api-access-m9nrc\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.282849 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9nrc\" (UniqueName: \"kubernetes.io/projected/fa5963aa-3702-4432-a430-e6716768ed8c-kube-api-access-m9nrc\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.282952 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-kolla-config\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.282993 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-config-data\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " 
pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.284187 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-config-data\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.284314 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa5963aa-3702-4432-a430-e6716768ed8c-kolla-config\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.314165 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9nrc\" (UniqueName: \"kubernetes.io/projected/fa5963aa-3702-4432-a430-e6716768ed8c-kube-api-access-m9nrc\") pod \"memcached-0\" (UID: \"fa5963aa-3702-4432-a430-e6716768ed8c\") " pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.388813 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:00 crc kubenswrapper[4747]: I1126 13:33:00.764173 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 26 13:33:00 crc kubenswrapper[4747]: W1126 13:33:00.767844 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa5963aa_3702_4432_a430_e6716768ed8c.slice/crio-c94157d19f1d277048075222c43a3044fa91faf4c70e5fb60f43cd3da6e68146 WatchSource:0}: Error finding container c94157d19f1d277048075222c43a3044fa91faf4c70e5fb60f43cd3da6e68146: Status 404 returned error can't find the container with id c94157d19f1d277048075222c43a3044fa91faf4c70e5fb60f43cd3da6e68146 Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.026875 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.027744 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.030856 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-cfjkh" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.044450 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.095510 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dhr\" (UniqueName: \"kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr\") pod \"rabbitmq-cluster-operator-index-5rtdn\" (UID: \"04fe6ff8-223c-4d9b-9f59-661e04562b9b\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.197256 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9dhr\" (UniqueName: \"kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr\") pod \"rabbitmq-cluster-operator-index-5rtdn\" (UID: \"04fe6ff8-223c-4d9b-9f59-661e04562b9b\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.217732 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9dhr\" (UniqueName: \"kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr\") pod \"rabbitmq-cluster-operator-index-5rtdn\" (UID: \"04fe6ff8-223c-4d9b-9f59-661e04562b9b\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.342048 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.584968 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"fa5963aa-3702-4432-a430-e6716768ed8c","Type":"ContainerStarted","Data":"c94157d19f1d277048075222c43a3044fa91faf4c70e5fb60f43cd3da6e68146"} Nov 26 13:33:01 crc kubenswrapper[4747]: I1126 13:33:01.893892 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:02 crc kubenswrapper[4747]: I1126 13:33:02.598783 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" event={"ID":"04fe6ff8-223c-4d9b-9f59-661e04562b9b","Type":"ContainerStarted","Data":"d88ad34bdb92406100fb6b6e49eb46664c19d5ccf879be2e7c513fdf6074410f"} Nov 26 13:33:04 crc kubenswrapper[4747]: I1126 13:33:04.615353 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"fa5963aa-3702-4432-a430-e6716768ed8c","Type":"ContainerStarted","Data":"6ec1437eb1f69734ebfc65ed5a4d06b0915f213492d36961cd96bf499a124229"} Nov 26 13:33:04 crc kubenswrapper[4747]: I1126 13:33:04.616643 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:04 crc kubenswrapper[4747]: I1126 13:33:04.635099 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=1.630808587 podStartE2EDuration="4.635078472s" podCreationTimestamp="2025-11-26 13:33:00 +0000 UTC" firstStartedPulling="2025-11-26 13:33:00.769762383 +0000 UTC m=+1067.756073398" lastFinishedPulling="2025-11-26 13:33:03.774032268 +0000 UTC m=+1070.760343283" observedRunningTime="2025-11-26 13:33:04.632870937 +0000 UTC m=+1071.619181962" watchObservedRunningTime="2025-11-26 13:33:04.635078472 +0000 UTC m=+1071.621389497" Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.210497 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.824943 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nw29v"] Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.826387 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.830861 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nw29v"] Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.867260 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659wl\" (UniqueName: \"kubernetes.io/projected/0cd030c4-2860-4ff1-a643-17a38f5cb419-kube-api-access-659wl\") pod \"rabbitmq-cluster-operator-index-nw29v\" (UID: \"0cd030c4-2860-4ff1-a643-17a38f5cb419\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:05 crc kubenswrapper[4747]: I1126 13:33:05.969715 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659wl\" (UniqueName: \"kubernetes.io/projected/0cd030c4-2860-4ff1-a643-17a38f5cb419-kube-api-access-659wl\") pod \"rabbitmq-cluster-operator-index-nw29v\" (UID: \"0cd030c4-2860-4ff1-a643-17a38f5cb419\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.003122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659wl\" (UniqueName: \"kubernetes.io/projected/0cd030c4-2860-4ff1-a643-17a38f5cb419-kube-api-access-659wl\") pod \"rabbitmq-cluster-operator-index-nw29v\" (UID: \"0cd030c4-2860-4ff1-a643-17a38f5cb419\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.147832 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.629448 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" podUID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" containerName="registry-server" containerID="cri-o://7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184" gracePeriod=2 Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.629780 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" event={"ID":"04fe6ff8-223c-4d9b-9f59-661e04562b9b","Type":"ContainerStarted","Data":"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184"} Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.647527 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" podStartSLOduration=1.298317274 podStartE2EDuration="5.647508671s" podCreationTimestamp="2025-11-26 13:33:01 +0000 UTC" firstStartedPulling="2025-11-26 13:33:01.916325458 +0000 UTC m=+1068.902636473" lastFinishedPulling="2025-11-26 13:33:06.265516855 +0000 UTC m=+1073.251827870" observedRunningTime="2025-11-26 13:33:06.645631764 +0000 UTC m=+1073.631942779" watchObservedRunningTime="2025-11-26 13:33:06.647508671 +0000 UTC m=+1073.633819686" Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.673893 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nw29v"] Nov 26 13:33:06 crc kubenswrapper[4747]: W1126 13:33:06.729701 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cd030c4_2860_4ff1_a643_17a38f5cb419.slice/crio-e83e3073f397d6224f7d051b58cd774301014d6fdd5d5dd4ea733809e3176312 WatchSource:0}: Error finding container e83e3073f397d6224f7d051b58cd774301014d6fdd5d5dd4ea733809e3176312: Status 404 returned error can't find the container with id e83e3073f397d6224f7d051b58cd774301014d6fdd5d5dd4ea733809e3176312 Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.958587 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.983485 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9dhr\" (UniqueName: \"kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr\") pod \"04fe6ff8-223c-4d9b-9f59-661e04562b9b\" (UID: \"04fe6ff8-223c-4d9b-9f59-661e04562b9b\") " Nov 26 13:33:06 crc kubenswrapper[4747]: I1126 13:33:06.990486 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr" (OuterVolumeSpecName: "kube-api-access-h9dhr") pod "04fe6ff8-223c-4d9b-9f59-661e04562b9b" (UID: "04fe6ff8-223c-4d9b-9f59-661e04562b9b"). InnerVolumeSpecName "kube-api-access-h9dhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.084789 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9dhr\" (UniqueName: \"kubernetes.io/projected/04fe6ff8-223c-4d9b-9f59-661e04562b9b-kube-api-access-h9dhr\") on node \"crc\" DevicePath \"\"" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.636277 4747 generic.go:334] "Generic (PLEG): container finished" podID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" containerID="7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184" exitCode=0 Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.636334 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" event={"ID":"04fe6ff8-223c-4d9b-9f59-661e04562b9b","Type":"ContainerDied","Data":"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184"} Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.636625 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" event={"ID":"04fe6ff8-223c-4d9b-9f59-661e04562b9b","Type":"ContainerDied","Data":"d88ad34bdb92406100fb6b6e49eb46664c19d5ccf879be2e7c513fdf6074410f"} Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.636347 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5rtdn" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.636656 4747 scope.go:117] "RemoveContainer" containerID="7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.638565 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" event={"ID":"0cd030c4-2860-4ff1-a643-17a38f5cb419","Type":"ContainerStarted","Data":"70bd3309701062eed633273036c519c131468dcbbd7cff7c2e6fa6b21ab2c770"} Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.638610 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" event={"ID":"0cd030c4-2860-4ff1-a643-17a38f5cb419","Type":"ContainerStarted","Data":"e83e3073f397d6224f7d051b58cd774301014d6fdd5d5dd4ea733809e3176312"} Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.655760 4747 scope.go:117] "RemoveContainer" containerID="7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184" Nov 26 13:33:07 crc kubenswrapper[4747]: E1126 13:33:07.656224 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184\": container with ID starting with 7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184 not found: ID does not exist" containerID="7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.656272 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184"} err="failed to get container status \"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184\": rpc error: code = NotFound desc = could not find container \"7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184\": container with ID starting with 7605dcc5cc8d15147946eedfe939cd23575a6299ec704484ea716d0493a09184 not found: ID does not exist" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.669226 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" podStartSLOduration=2.045113197 podStartE2EDuration="2.669198422s" podCreationTimestamp="2025-11-26 13:33:05 +0000 UTC" firstStartedPulling="2025-11-26 13:33:06.733873705 +0000 UTC m=+1073.720184720" lastFinishedPulling="2025-11-26 13:33:07.3579589 +0000 UTC m=+1074.344269945" observedRunningTime="2025-11-26 13:33:07.661398687 +0000 UTC m=+1074.647709712" watchObservedRunningTime="2025-11-26 13:33:07.669198422 +0000 UTC m=+1074.655509467" Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.677568 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.687435 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5rtdn"] Nov 26 13:33:07 crc kubenswrapper[4747]: I1126 13:33:07.815191 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" path="/var/lib/kubelet/pods/04fe6ff8-223c-4d9b-9f59-661e04562b9b/volumes" Nov 26 13:33:08 crc kubenswrapper[4747]: I1126 13:33:08.268112 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:33:08 crc kubenswrapper[4747]: I1126 13:33:08.268880 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:33:08 crc kubenswrapper[4747]: I1126 13:33:08.339436 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:33:08 crc kubenswrapper[4747]: I1126 13:33:08.703799 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2" Nov 26 13:33:10 crc kubenswrapper[4747]: I1126 13:33:10.390050 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0" Nov 26 13:33:16 crc kubenswrapper[4747]: I1126 13:33:16.148625 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:16 crc kubenswrapper[4747]: I1126 13:33:16.149409 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:16 crc kubenswrapper[4747]: I1126 13:33:16.198127 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:16 crc kubenswrapper[4747]: I1126 13:33:16.725766 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-nw29v" Nov 26 13:33:18 crc kubenswrapper[4747]: I1126 13:33:18.356618 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="298454c7-93bf-41be-877e-9f3e27f47119" containerName="galera" probeResult="failure" output=< Nov 26 13:33:18 crc kubenswrapper[4747]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Nov 26 13:33:18 crc kubenswrapper[4747]: > Nov 26 13:33:22 crc kubenswrapper[4747]: I1126 13:33:22.945508 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:33:23 crc kubenswrapper[4747]: I1126 13:33:23.051143 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0" Nov 26 13:33:23 crc kubenswrapper[4747]: I1126 13:33:23.895264 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:33:23 crc kubenswrapper[4747]: I1126 13:33:23.954549 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.656121 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt"] Nov 26 13:33:24 crc kubenswrapper[4747]: E1126 13:33:24.656394 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" containerName="registry-server" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.656414 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" containerName="registry-server" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.656570 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fe6ff8-223c-4d9b-9f59-661e04562b9b" containerName="registry-server" Nov 26 13:33:24 crc 
kubenswrapper[4747]: I1126 13:33:24.657383 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.659460 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.666199 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt"] Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.745026 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.745260 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.745462 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx58\" (UniqueName: \"kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.847294 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.847478 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx58\" (UniqueName: \"kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.847558 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.848268 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.848276 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.871801 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx58\" (UniqueName: \"kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:24 crc kubenswrapper[4747]: I1126 13:33:24.980365 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:25 crc kubenswrapper[4747]: I1126 13:33:25.420978 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt"] Nov 26 13:33:25 crc kubenswrapper[4747]: W1126 13:33:25.425774 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e289a8_87c2_4f73_920c_da94be9d0642.slice/crio-46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103 WatchSource:0}: Error finding container 46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103: Status 404 returned error can't find the container with id 46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103 Nov 26 13:33:25 crc kubenswrapper[4747]: I1126 13:33:25.768336 4747 generic.go:334] "Generic (PLEG): container finished" podID="11e289a8-87c2-4f73-920c-da94be9d0642" containerID="d24bad7d671c97f140e441367f71b32f9e2c66c07cae721a6333d1a8e28f9ab4" exitCode=0 Nov 26 13:33:25 crc kubenswrapper[4747]: I1126 13:33:25.768400 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" event={"ID":"11e289a8-87c2-4f73-920c-da94be9d0642","Type":"ContainerDied","Data":"d24bad7d671c97f140e441367f71b32f9e2c66c07cae721a6333d1a8e28f9ab4"} Nov 26 13:33:25 crc kubenswrapper[4747]: I1126 13:33:25.768438 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" event={"ID":"11e289a8-87c2-4f73-920c-da94be9d0642","Type":"ContainerStarted","Data":"46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103"} Nov 26 13:33:26 crc kubenswrapper[4747]: I1126 13:33:26.778431 4747 generic.go:334] "Generic (PLEG): container finished" podID="11e289a8-87c2-4f73-920c-da94be9d0642" containerID="549e981a5c604cbdc813fbbff235aacc83bfe42b473ee5616b24d2817c89f304" exitCode=0 Nov 26 13:33:26 crc kubenswrapper[4747]: I1126 13:33:26.778503 4747 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" event={"ID":"11e289a8-87c2-4f73-920c-da94be9d0642","Type":"ContainerDied","Data":"549e981a5c604cbdc813fbbff235aacc83bfe42b473ee5616b24d2817c89f304"} Nov 26 13:33:27 crc kubenswrapper[4747]: I1126 13:33:27.789872 4747 generic.go:334] "Generic (PLEG): container finished" podID="11e289a8-87c2-4f73-920c-da94be9d0642" containerID="e1bae26d4618cc4ecd96740767c973201b018e87e43ed3dfa5f973a680cd32d6" exitCode=0 Nov 26 13:33:27 crc kubenswrapper[4747]: I1126 13:33:27.789924 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" event={"ID":"11e289a8-87c2-4f73-920c-da94be9d0642","Type":"ContainerDied","Data":"e1bae26d4618cc4ecd96740767c973201b018e87e43ed3dfa5f973a680cd32d6"} Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.069137 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.204843 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util\") pod \"11e289a8-87c2-4f73-920c-da94be9d0642\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.204959 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle\") pod \"11e289a8-87c2-4f73-920c-da94be9d0642\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.205116 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdx58\" (UniqueName: \"kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58\") pod \"11e289a8-87c2-4f73-920c-da94be9d0642\" (UID: \"11e289a8-87c2-4f73-920c-da94be9d0642\") " Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.205731 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle" (OuterVolumeSpecName: "bundle") pod "11e289a8-87c2-4f73-920c-da94be9d0642" (UID: "11e289a8-87c2-4f73-920c-da94be9d0642"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.213581 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58" (OuterVolumeSpecName: "kube-api-access-tdx58") pod "11e289a8-87c2-4f73-920c-da94be9d0642" (UID: "11e289a8-87c2-4f73-920c-da94be9d0642"). InnerVolumeSpecName "kube-api-access-tdx58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.219039 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util" (OuterVolumeSpecName: "util") pod "11e289a8-87c2-4f73-920c-da94be9d0642" (UID: "11e289a8-87c2-4f73-920c-da94be9d0642"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.307523 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.307563 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdx58\" (UniqueName: \"kubernetes.io/projected/11e289a8-87c2-4f73-920c-da94be9d0642-kube-api-access-tdx58\") on node \"crc\" DevicePath \"\"" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.307578 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e289a8-87c2-4f73-920c-da94be9d0642-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.806940 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.807243 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt" event={"ID":"11e289a8-87c2-4f73-920c-da94be9d0642","Type":"ContainerDied","Data":"46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103"} Nov 26 13:33:29 crc kubenswrapper[4747]: I1126 13:33:29.807288 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46440bb4fb87264a9264fbe5d0ed9cc4465a0613dcb0ff65bcea45b339114103" Nov 26 13:33:33 crc kubenswrapper[4747]: I1126 13:33:33.417192 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:33:33 crc kubenswrapper[4747]: I1126 13:33:33.417527 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.589537 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4"] Nov 26 13:33:37 crc kubenswrapper[4747]: E1126 13:33:37.590408 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="extract" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.590427 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="extract" Nov 26 13:33:37 crc kubenswrapper[4747]: E1126 13:33:37.590445 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="util" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.590453 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="util" Nov 26 13:33:37 crc kubenswrapper[4747]: E1126 13:33:37.590466 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="pull" Nov 26 13:33:37 crc 
kubenswrapper[4747]: I1126 13:33:37.590473 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="pull" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.590615 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e289a8-87c2-4f73-920c-da94be9d0642" containerName="extract" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.591178 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.593580 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-28kmv" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.604357 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4"] Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.729403 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvp9\" (UniqueName: \"kubernetes.io/projected/af5d51b6-54ab-4dd7-a34f-74398364c04c-kube-api-access-vmvp9\") pod \"rabbitmq-cluster-operator-779fc9694b-lgbt4\" (UID: \"af5d51b6-54ab-4dd7-a34f-74398364c04c\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.831392 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvp9\" (UniqueName: \"kubernetes.io/projected/af5d51b6-54ab-4dd7-a34f-74398364c04c-kube-api-access-vmvp9\") pod \"rabbitmq-cluster-operator-779fc9694b-lgbt4\" (UID: \"af5d51b6-54ab-4dd7-a34f-74398364c04c\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.854566 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvp9\" (UniqueName: \"kubernetes.io/projected/af5d51b6-54ab-4dd7-a34f-74398364c04c-kube-api-access-vmvp9\") pod \"rabbitmq-cluster-operator-779fc9694b-lgbt4\" (UID: \"af5d51b6-54ab-4dd7-a34f-74398364c04c\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" Nov 26 13:33:37 crc kubenswrapper[4747]: I1126 13:33:37.919207 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" Nov 26 13:33:38 crc kubenswrapper[4747]: I1126 13:33:38.219784 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4"] Nov 26 13:33:38 crc kubenswrapper[4747]: I1126 13:33:38.875303 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" event={"ID":"af5d51b6-54ab-4dd7-a34f-74398364c04c","Type":"ContainerStarted","Data":"6aa0c2a52cb4c0a0f74287bfa0ad0792a1d3e7ca236087c0c15b98800aa1ae85"} Nov 26 13:33:41 crc kubenswrapper[4747]: I1126 13:33:41.934545 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" event={"ID":"af5d51b6-54ab-4dd7-a34f-74398364c04c","Type":"ContainerStarted","Data":"55b31f2850cab7eab1a8ab05bb487c4dcdb58e152a0fb69fe13318224cfec57a"} Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.409195 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-lgbt4" podStartSLOduration=8.47911607 podStartE2EDuration="11.409162564s" podCreationTimestamp="2025-11-26 13:33:37 +0000 UTC" firstStartedPulling="2025-11-26 13:33:38.240263468 +0000 UTC m=+1105.226574483" lastFinishedPulling="2025-11-26 13:33:41.170309962 +0000 UTC m=+1108.156620977" observedRunningTime="2025-11-26 13:33:41.958267018 +0000 UTC m=+1108.944578073" watchObservedRunningTime="2025-11-26 13:33:48.409162564 +0000 UTC m=+1115.395473639" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.410733 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.412631 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.415230 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.415433 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.415842 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.416135 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-rrt9k" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.416762 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.425168 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528439 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528488 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528583 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528657 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528775 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2n6\" (UniqueName: \"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-kube-api-access-gl2n6\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528798 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " 
pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528839 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.528882 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.629987 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630114 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630156 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630200 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630311 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl2n6\" (UniqueName: \"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-kube-api-access-gl2n6\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630364 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630419 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 
crc kubenswrapper[4747]: I1126 13:33:48.630466 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.630586 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.631122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.631672 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.633655 4747 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.633689 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10185f8d476266910c95ed07c76bada28bc0209e842e8f2204871d7b1967bc2a/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.638667 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.638686 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.639937 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.658393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl2n6\" (UniqueName: 
\"kubernetes.io/projected/e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21-kube-api-access-gl2n6\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.670685 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9189ddc4-93b2-4d7a-9d0b-623a8474a11a\") pod \"rabbitmq-server-0\" (UID: \"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:48 crc kubenswrapper[4747]: I1126 13:33:48.738790 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:33:49 crc kubenswrapper[4747]: I1126 13:33:49.266266 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.017304 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21","Type":"ContainerStarted","Data":"9abee8fd3a6cfce1843993eb8ebbad536e46289ffa346aa2a4a91ed8d1fa15f5"} Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.033978 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-zzjrs"] Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.040178 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.044700 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-8f2d8" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.050475 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-zzjrs"] Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.153292 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdcgz\" (UniqueName: \"kubernetes.io/projected/5dddeb24-eccf-47cc-a06d-302ac1cf1c1e-kube-api-access-jdcgz\") pod \"keystone-operator-index-zzjrs\" (UID: \"5dddeb24-eccf-47cc-a06d-302ac1cf1c1e\") " pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.254548 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdcgz\" (UniqueName: \"kubernetes.io/projected/5dddeb24-eccf-47cc-a06d-302ac1cf1c1e-kube-api-access-jdcgz\") pod \"keystone-operator-index-zzjrs\" (UID: \"5dddeb24-eccf-47cc-a06d-302ac1cf1c1e\") " pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.277821 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdcgz\" (UniqueName: \"kubernetes.io/projected/5dddeb24-eccf-47cc-a06d-302ac1cf1c1e-kube-api-access-jdcgz\") pod \"keystone-operator-index-zzjrs\" (UID: \"5dddeb24-eccf-47cc-a06d-302ac1cf1c1e\") " pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.365226 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:33:50 crc kubenswrapper[4747]: I1126 13:33:50.830089 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-zzjrs"] Nov 26 13:33:50 crc kubenswrapper[4747]: W1126 13:33:50.838625 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dddeb24_eccf_47cc_a06d_302ac1cf1c1e.slice/crio-5358281727d7904ebf0a294e6f310437d1fc070c38f07576bfb813af607bd0e7 WatchSource:0}: Error finding container 5358281727d7904ebf0a294e6f310437d1fc070c38f07576bfb813af607bd0e7: Status 404 returned error can't find the container with id 5358281727d7904ebf0a294e6f310437d1fc070c38f07576bfb813af607bd0e7 Nov 26 13:33:51 crc kubenswrapper[4747]: I1126 13:33:51.026390 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-zzjrs" event={"ID":"5dddeb24-eccf-47cc-a06d-302ac1cf1c1e","Type":"ContainerStarted","Data":"5358281727d7904ebf0a294e6f310437d1fc070c38f07576bfb813af607bd0e7"} Nov 26 13:33:56 crc kubenswrapper[4747]: I1126 13:33:56.075237 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-zzjrs" event={"ID":"5dddeb24-eccf-47cc-a06d-302ac1cf1c1e","Type":"ContainerStarted","Data":"27899dfc5e915cf0936dd80bf5bb84ddf29cf7637852ed834d913434426b8d21"} Nov 26 13:33:56 crc kubenswrapper[4747]: I1126 13:33:56.103122 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-zzjrs" podStartSLOduration=1.677184987 podStartE2EDuration="6.103101859s" podCreationTimestamp="2025-11-26 13:33:50 +0000 UTC" firstStartedPulling="2025-11-26 13:33:50.840893313 +0000 UTC m=+1117.827204328" lastFinishedPulling="2025-11-26 13:33:55.266810185 +0000 UTC m=+1122.253121200" observedRunningTime="2025-11-26 13:33:56.096259249 +0000 UTC m=+1123.082570304" watchObservedRunningTime="2025-11-26 13:33:56.103101859 +0000 UTC m=+1123.089412894" Nov 26 13:33:57 crc kubenswrapper[4747]: I1126 13:33:57.085253 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21","Type":"ContainerStarted","Data":"430778195b47916a5d8b94d1492cd86994f0b0f789a40f2537352a7dcd419630"} Nov 26 13:34:00 crc kubenswrapper[4747]: I1126 13:34:00.366302 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:34:00 crc kubenswrapper[4747]: I1126 13:34:00.366732 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:34:00 crc kubenswrapper[4747]: I1126 13:34:00.407652 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:34:01 crc kubenswrapper[4747]: I1126 13:34:01.152280 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-zzjrs" Nov 26 13:34:03 crc kubenswrapper[4747]: I1126 13:34:03.418282 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:34:03 crc 
Nov 26 13:34:08 crc kubenswrapper[4747]: I1126 13:34:08.872474 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"]
Nov 26 13:34:08 crc kubenswrapper[4747]: I1126 13:34:08.877153 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:08 crc kubenswrapper[4747]: I1126 13:34:08.881695 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv"
Nov 26 13:34:08 crc kubenswrapper[4747]: I1126 13:34:08.890034 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"]
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.034870 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrqh\" (UniqueName: \"kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.035012 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.035119 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.136002 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.136384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.136429 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrqh\" (UniqueName: \"kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.136609 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.136907 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.166253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrqh\" (UniqueName: \"kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.191240 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"
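Note the two message variants that recur throughout this log: util.go:30 ("No sandbox for pod can be found") when a pod is starting for the first time, and util.go:48 ("No ready sandbox for pod can be found"), seen further down, once an existing sandbox has gone not-ready. A hedged sketch of that branching, with an illustrative sandbox struct that is not the kubelet's type:

package main

import "fmt"

type sandbox struct {
	exists bool
	ready  bool
}

// needNewSandbox mirrors the two log variants: no sandbox at all, or a
// sandbox that exists but is no longer ready.
func needNewSandbox(s sandbox) (bool, string) {
	switch {
	case !s.exists:
		return true, "No sandbox for pod can be found. Need to start a new one"
	case !s.ready:
		return true, "No ready sandbox for pod can be found. Need to start a new one"
	default:
		return false, ""
	}
}

func main() {
	for _, s := range []sandbox{{}, {exists: true}, {exists: true, ready: true}} {
		if need, msg := needNewSandbox(s); need {
			fmt.Println(msg)
		} else {
			fmt.Println("sandbox reusable; sync containers only")
		}
	}
}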
Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" Nov 26 13:34:09 crc kubenswrapper[4747]: I1126 13:34:09.622943 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc"] Nov 26 13:34:09 crc kubenswrapper[4747]: W1126 13:34:09.628212 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643ba26b_247f_4601_a33e_60a171de4a37.slice/crio-2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76 WatchSource:0}: Error finding container 2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76: Status 404 returned error can't find the container with id 2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76 Nov 26 13:34:10 crc kubenswrapper[4747]: I1126 13:34:10.172069 4747 generic.go:334] "Generic (PLEG): container finished" podID="643ba26b-247f-4601-a33e-60a171de4a37" containerID="909606c82c29261655151bfc1e1ae7189833be299291f39718965834c29a190e" exitCode=0 Nov 26 13:34:10 crc kubenswrapper[4747]: I1126 13:34:10.172120 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" event={"ID":"643ba26b-247f-4601-a33e-60a171de4a37","Type":"ContainerDied","Data":"909606c82c29261655151bfc1e1ae7189833be299291f39718965834c29a190e"} Nov 26 13:34:10 crc kubenswrapper[4747]: I1126 13:34:10.172417 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" event={"ID":"643ba26b-247f-4601-a33e-60a171de4a37","Type":"ContainerStarted","Data":"2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76"} Nov 26 13:34:15 crc kubenswrapper[4747]: I1126 13:34:15.210387 4747 generic.go:334] "Generic (PLEG): container finished" podID="643ba26b-247f-4601-a33e-60a171de4a37" containerID="778a937987f81bb51674bb6a0087b07bafb72b1c73b2f8403785e60d11e84500" exitCode=0 Nov 26 13:34:15 crc kubenswrapper[4747]: I1126 13:34:15.210510 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" event={"ID":"643ba26b-247f-4601-a33e-60a171de4a37","Type":"ContainerDied","Data":"778a937987f81bb51674bb6a0087b07bafb72b1c73b2f8403785e60d11e84500"} Nov 26 13:34:16 crc kubenswrapper[4747]: I1126 13:34:16.221081 4747 generic.go:334] "Generic (PLEG): container finished" podID="643ba26b-247f-4601-a33e-60a171de4a37" containerID="dbeeb23fe55240aa4f0bea8c92effd29b9827599dca468302a714fbaa6ccf1fd" exitCode=0 Nov 26 13:34:16 crc kubenswrapper[4747]: I1126 13:34:16.221139 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" event={"ID":"643ba26b-247f-4601-a33e-60a171de4a37","Type":"ContainerDied","Data":"dbeeb23fe55240aa4f0bea8c92effd29b9827599dca468302a714fbaa6ccf1fd"} Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.545514 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.652108 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util\") pod \"643ba26b-247f-4601-a33e-60a171de4a37\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.652182 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle\") pod \"643ba26b-247f-4601-a33e-60a171de4a37\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.652211 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rrqh\" (UniqueName: \"kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh\") pod \"643ba26b-247f-4601-a33e-60a171de4a37\" (UID: \"643ba26b-247f-4601-a33e-60a171de4a37\") " Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.653599 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle" (OuterVolumeSpecName: "bundle") pod "643ba26b-247f-4601-a33e-60a171de4a37" (UID: "643ba26b-247f-4601-a33e-60a171de4a37"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.660069 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh" (OuterVolumeSpecName: "kube-api-access-9rrqh") pod "643ba26b-247f-4601-a33e-60a171de4a37" (UID: "643ba26b-247f-4601-a33e-60a171de4a37"). InnerVolumeSpecName "kube-api-access-9rrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.663228 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util" (OuterVolumeSpecName: "util") pod "643ba26b-247f-4601-a33e-60a171de4a37" (UID: "643ba26b-247f-4601-a33e-60a171de4a37"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.754683 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.754753 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/643ba26b-247f-4601-a33e-60a171de4a37-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:17 crc kubenswrapper[4747]: I1126 13:34:17.754773 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rrqh\" (UniqueName: \"kubernetes.io/projected/643ba26b-247f-4601-a33e-60a171de4a37-kube-api-access-9rrqh\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:18 crc kubenswrapper[4747]: I1126 13:34:18.236592 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" event={"ID":"643ba26b-247f-4601-a33e-60a171de4a37","Type":"ContainerDied","Data":"2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76"} Nov 26 13:34:18 crc kubenswrapper[4747]: I1126 13:34:18.236896 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc" Nov 26 13:34:18 crc kubenswrapper[4747]: I1126 13:34:18.236922 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f64d36bc093779cb25ea7c767bc8fc3402b2caf717f699f46f8e78dfa805e76" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.598210 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2"] Nov 26 13:34:25 crc kubenswrapper[4747]: E1126 13:34:25.598963 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="pull" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.598979 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="pull" Nov 26 13:34:25 crc kubenswrapper[4747]: E1126 13:34:25.598994 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="extract" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.599001 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="extract" Nov 26 13:34:25 crc kubenswrapper[4747]: E1126 13:34:25.599016 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="util" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.599023 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="util" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.599178 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="643ba26b-247f-4601-a33e-60a171de4a37" containerName="extract" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.599704 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.601814 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.602291 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bgl5q" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.611976 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2"] Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.754602 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-apiservice-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.754785 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-webhook-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.754867 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv79g\" (UniqueName: \"kubernetes.io/projected/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-kube-api-access-cv79g\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.856119 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-apiservice-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.856474 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-webhook-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.856574 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv79g\" (UniqueName: \"kubernetes.io/projected/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-kube-api-access-cv79g\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.862406 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-apiservice-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.862838 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-webhook-cert\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.871181 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv79g\" (UniqueName: \"kubernetes.io/projected/f06c40e2-33fe-4c14-9f47-5b7ea72c2582-kube-api-access-cv79g\") pod \"keystone-operator-controller-manager-9bbbdb54c-nzgl2\" (UID: \"f06c40e2-33fe-4c14-9f47-5b7ea72c2582\") " pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:25 crc kubenswrapper[4747]: I1126 13:34:25.920603 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:26 crc kubenswrapper[4747]: I1126 13:34:26.332164 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2"] Nov 26 13:34:26 crc kubenswrapper[4747]: W1126 13:34:26.340185 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf06c40e2_33fe_4c14_9f47_5b7ea72c2582.slice/crio-a8aa073cd845661e87d87f2a333ef68d23cecfa5c1d43d550f3cc64969e56c6d WatchSource:0}: Error finding container a8aa073cd845661e87d87f2a333ef68d23cecfa5c1d43d550f3cc64969e56c6d: Status 404 returned error can't find the container with id a8aa073cd845661e87d87f2a333ef68d23cecfa5c1d43d550f3cc64969e56c6d Nov 26 13:34:27 crc kubenswrapper[4747]: I1126 13:34:27.302828 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" event={"ID":"f06c40e2-33fe-4c14-9f47-5b7ea72c2582","Type":"ContainerStarted","Data":"a8aa073cd845661e87d87f2a333ef68d23cecfa5c1d43d550f3cc64969e56c6d"} Nov 26 13:34:29 crc kubenswrapper[4747]: I1126 13:34:29.316917 4747 generic.go:334] "Generic (PLEG): container finished" podID="e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21" containerID="430778195b47916a5d8b94d1492cd86994f0b0f789a40f2537352a7dcd419630" exitCode=0 Nov 26 13:34:29 crc kubenswrapper[4747]: I1126 13:34:29.317021 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21","Type":"ContainerDied","Data":"430778195b47916a5d8b94d1492cd86994f0b0f789a40f2537352a7dcd419630"} Nov 26 13:34:31 crc kubenswrapper[4747]: I1126 13:34:31.331241 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21","Type":"ContainerStarted","Data":"bb127abeab3ec6cee83bf11fc014e4f7fc48f3156cb145b728ca757161657b59"} Nov 26 13:34:31 crc kubenswrapper[4747]: I1126 13:34:31.332532 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:34:31 crc kubenswrapper[4747]: I1126 13:34:31.358979 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=38.327270768 podStartE2EDuration="44.358962161s" podCreationTimestamp="2025-11-26 13:33:47 +0000 UTC" firstStartedPulling="2025-11-26 13:33:49.28143803 +0000 UTC m=+1116.267749075" lastFinishedPulling="2025-11-26 13:33:55.313128893 +0000 UTC m=+1122.299440468" observedRunningTime="2025-11-26 13:34:31.35768978 +0000 UTC m=+1158.344000795" watchObservedRunningTime="2025-11-26 13:34:31.358962161 +0000 UTC m=+1158.345273176" Nov 26 13:34:33 crc kubenswrapper[4747]: I1126 13:34:33.418022 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:34:33 crc kubenswrapper[4747]: I1126 13:34:33.418440 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:34:33 crc kubenswrapper[4747]: I1126 13:34:33.418492 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:34:33 crc kubenswrapper[4747]: I1126 13:34:33.419212 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:34:33 crc kubenswrapper[4747]: I1126 13:34:33.419287 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1" gracePeriod=600 Nov 26 13:34:34 crc kubenswrapper[4747]: I1126 13:34:34.355788 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1" exitCode=0 Nov 26 13:34:34 crc kubenswrapper[4747]: I1126 13:34:34.355816 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1"} Nov 26 13:34:34 crc kubenswrapper[4747]: I1126 13:34:34.356199 4747 scope.go:117] "RemoveContainer" containerID="18a51e290df690603e34ca806c79b649af6148fb3c9197b6098b541a9c3c88de" Nov 26 13:34:35 crc kubenswrapper[4747]: I1126 13:34:35.363505 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" 
event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8"} Nov 26 13:34:35 crc kubenswrapper[4747]: I1126 13:34:35.364603 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" event={"ID":"f06c40e2-33fe-4c14-9f47-5b7ea72c2582","Type":"ContainerStarted","Data":"05be3d97f3cf186ca359226b150bad5161b25c48fdf5d9521e1beb62e8d3767b"} Nov 26 13:34:36 crc kubenswrapper[4747]: I1126 13:34:36.375495 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:36 crc kubenswrapper[4747]: I1126 13:34:36.394619 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" podStartSLOduration=2.648807084 podStartE2EDuration="11.394594348s" podCreationTimestamp="2025-11-26 13:34:25 +0000 UTC" firstStartedPulling="2025-11-26 13:34:26.342654662 +0000 UTC m=+1153.328965717" lastFinishedPulling="2025-11-26 13:34:35.088441956 +0000 UTC m=+1162.074752981" observedRunningTime="2025-11-26 13:34:36.393600844 +0000 UTC m=+1163.379911859" watchObservedRunningTime="2025-11-26 13:34:36.394594348 +0000 UTC m=+1163.380905383" Nov 26 13:34:45 crc kubenswrapper[4747]: I1126 13:34:45.927188 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-9bbbdb54c-nzgl2" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.190676 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-4e67-account-create-update-gzgls"] Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.191678 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.193640 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.206411 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-npbxv"] Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.207310 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.217901 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-npbxv"] Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.248560 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-4e67-account-create-update-gzgls"] Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.324553 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.324602 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9vc\" (UniqueName: \"kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.324641 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f5kz\" (UniqueName: \"kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.324674 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.426485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.426607 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9vc\" (UniqueName: \"kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.426706 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f5kz\" (UniqueName: \"kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.426774 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.572372 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.572389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.577031 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f5kz\" (UniqueName: \"kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz\") pod \"keystone-4e67-account-create-update-gzgls\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.581570 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9vc\" (UniqueName: \"kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc\") pod \"keystone-db-create-npbxv\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.741905 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.807081 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:48 crc kubenswrapper[4747]: I1126 13:34:48.820825 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:49 crc kubenswrapper[4747]: W1126 13:34:49.246023 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78c2b07_9725_4820_81af_7f684201bece.slice/crio-d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585 WatchSource:0}: Error finding container d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585: Status 404 returned error can't find the container with id d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585 Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.248675 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-npbxv"] Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.295967 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-4e67-account-create-update-gzgls"] Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.466190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" event={"ID":"add568ec-ad1a-489b-a5b3-802f0cc37a8a","Type":"ContainerStarted","Data":"6fb3a0c0fe75857bc683dbfd91bf7997915d18b0c1fd72387be2f7dbf4fb37d3"} Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.466235 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" event={"ID":"add568ec-ad1a-489b-a5b3-802f0cc37a8a","Type":"ContainerStarted","Data":"f1ced83c231d7111503630e8ae02df385ff1139d8515dfac9c040d930cafd77a"} Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.468356 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-npbxv" event={"ID":"d78c2b07-9725-4820-81af-7f684201bece","Type":"ContainerStarted","Data":"0ae81ae763777e57cbaf96c6e8e88be24e9866759631d8db3cebacbb49e5bfe6"} Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.468412 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-npbxv" event={"ID":"d78c2b07-9725-4820-81af-7f684201bece","Type":"ContainerStarted","Data":"d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585"} Nov 26 13:34:49 crc kubenswrapper[4747]: I1126 13:34:49.490503 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-create-npbxv" podStartSLOduration=1.490487133 podStartE2EDuration="1.490487133s" podCreationTimestamp="2025-11-26 13:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:34:49.489177051 +0000 UTC m=+1176.475488076" watchObservedRunningTime="2025-11-26 13:34:49.490487133 +0000 UTC m=+1176.476798148" Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.477135 4747 generic.go:334] "Generic (PLEG): container finished" podID="d78c2b07-9725-4820-81af-7f684201bece" containerID="0ae81ae763777e57cbaf96c6e8e88be24e9866759631d8db3cebacbb49e5bfe6" exitCode=0 Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.477204 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-npbxv" event={"ID":"d78c2b07-9725-4820-81af-7f684201bece","Type":"ContainerDied","Data":"0ae81ae763777e57cbaf96c6e8e88be24e9866759631d8db3cebacbb49e5bfe6"} Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.480008 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="add568ec-ad1a-489b-a5b3-802f0cc37a8a" containerID="6fb3a0c0fe75857bc683dbfd91bf7997915d18b0c1fd72387be2f7dbf4fb37d3" exitCode=0 Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.480042 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" event={"ID":"add568ec-ad1a-489b-a5b3-802f0cc37a8a","Type":"ContainerDied","Data":"6fb3a0c0fe75857bc683dbfd91bf7997915d18b0c1fd72387be2f7dbf4fb37d3"} Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.821029 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-8zsmn"] Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.821923 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.824417 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-ljzkj" Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.838658 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-8zsmn"] Nov 26 13:34:50 crc kubenswrapper[4747]: I1126 13:34:50.963705 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vkhb\" (UniqueName: \"kubernetes.io/projected/1f50f300-6e38-4c1e-9679-76ee15a7dcd1-kube-api-access-6vkhb\") pod \"horizon-operator-index-8zsmn\" (UID: \"1f50f300-6e38-4c1e-9679-76ee15a7dcd1\") " pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.064965 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vkhb\" (UniqueName: \"kubernetes.io/projected/1f50f300-6e38-4c1e-9679-76ee15a7dcd1-kube-api-access-6vkhb\") pod \"horizon-operator-index-8zsmn\" (UID: \"1f50f300-6e38-4c1e-9679-76ee15a7dcd1\") " pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.089047 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vkhb\" (UniqueName: \"kubernetes.io/projected/1f50f300-6e38-4c1e-9679-76ee15a7dcd1-kube-api-access-6vkhb\") pod \"horizon-operator-index-8zsmn\" (UID: \"1f50f300-6e38-4c1e-9679-76ee15a7dcd1\") " pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.137676 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.548765 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-8zsmn"] Nov 26 13:34:51 crc kubenswrapper[4747]: W1126 13:34:51.563503 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f50f300_6e38_4c1e_9679_76ee15a7dcd1.slice/crio-42ee25f9f7165f6109d264412afd656a321ed5012c7ede21d231890d124f86aa WatchSource:0}: Error finding container 42ee25f9f7165f6109d264412afd656a321ed5012c7ede21d231890d124f86aa: Status 404 returned error can't find the container with id 42ee25f9f7165f6109d264412afd656a321ed5012c7ede21d231890d124f86aa Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.747269 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.765709 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.875034 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts\") pod \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.875325 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts\") pod \"d78c2b07-9725-4820-81af-7f684201bece\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.876166 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "add568ec-ad1a-489b-a5b3-802f0cc37a8a" (UID: "add568ec-ad1a-489b-a5b3-802f0cc37a8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.876169 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d78c2b07-9725-4820-81af-7f684201bece" (UID: "d78c2b07-9725-4820-81af-7f684201bece"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.876251 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f5kz\" (UniqueName: \"kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz\") pod \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\" (UID: \"add568ec-ad1a-489b-a5b3-802f0cc37a8a\") " Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.877224 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9vc\" (UniqueName: \"kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc\") pod \"d78c2b07-9725-4820-81af-7f684201bece\" (UID: \"d78c2b07-9725-4820-81af-7f684201bece\") " Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.878778 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add568ec-ad1a-489b-a5b3-802f0cc37a8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.878801 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78c2b07-9725-4820-81af-7f684201bece-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.882293 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc" (OuterVolumeSpecName: "kube-api-access-9z9vc") pod "d78c2b07-9725-4820-81af-7f684201bece" (UID: "d78c2b07-9725-4820-81af-7f684201bece"). 
InnerVolumeSpecName "kube-api-access-9z9vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.882332 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz" (OuterVolumeSpecName: "kube-api-access-5f5kz") pod "add568ec-ad1a-489b-a5b3-802f0cc37a8a" (UID: "add568ec-ad1a-489b-a5b3-802f0cc37a8a"). InnerVolumeSpecName "kube-api-access-5f5kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.980252 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f5kz\" (UniqueName: \"kubernetes.io/projected/add568ec-ad1a-489b-a5b3-802f0cc37a8a-kube-api-access-5f5kz\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:51 crc kubenswrapper[4747]: I1126 13:34:51.980289 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9vc\" (UniqueName: \"kubernetes.io/projected/d78c2b07-9725-4820-81af-7f684201bece-kube-api-access-9z9vc\") on node \"crc\" DevicePath \"\"" Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.493049 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.493018 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-4e67-account-create-update-gzgls" event={"ID":"add568ec-ad1a-489b-a5b3-802f0cc37a8a","Type":"ContainerDied","Data":"f1ced83c231d7111503630e8ae02df385ff1139d8515dfac9c040d930cafd77a"} Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.493257 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ced83c231d7111503630e8ae02df385ff1139d8515dfac9c040d930cafd77a" Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.494007 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-8zsmn" event={"ID":"1f50f300-6e38-4c1e-9679-76ee15a7dcd1","Type":"ContainerStarted","Data":"42ee25f9f7165f6109d264412afd656a321ed5012c7ede21d231890d124f86aa"} Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.495357 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-npbxv" event={"ID":"d78c2b07-9725-4820-81af-7f684201bece","Type":"ContainerDied","Data":"d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585"} Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.495381 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d131d148978d94f43d56140cbd0e66f49cb19c9ff50739008d32685001648585" Nov 26 13:34:52 crc kubenswrapper[4747]: I1126 13:34:52.495396 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-npbxv" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.747642 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-62sk4"] Nov 26 13:34:53 crc kubenswrapper[4747]: E1126 13:34:53.749283 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add568ec-ad1a-489b-a5b3-802f0cc37a8a" containerName="mariadb-account-create-update" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.749411 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="add568ec-ad1a-489b-a5b3-802f0cc37a8a" containerName="mariadb-account-create-update" Nov 26 13:34:53 crc kubenswrapper[4747]: E1126 13:34:53.749533 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78c2b07-9725-4820-81af-7f684201bece" containerName="mariadb-database-create" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.749828 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78c2b07-9725-4820-81af-7f684201bece" containerName="mariadb-database-create" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.750174 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="add568ec-ad1a-489b-a5b3-802f0cc37a8a" containerName="mariadb-account-create-update" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.750294 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78c2b07-9725-4820-81af-7f684201bece" containerName="mariadb-database-create" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.750937 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.756841 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.756908 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.757114 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.758119 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-4klb9" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.763614 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-62sk4"] Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.904668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgxk\" (UniqueName: \"kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:53 crc kubenswrapper[4747]: I1126 13:34:53.904737 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.007040 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zgxk\" (UniqueName: 
\"kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.007124 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.015854 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.018331 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.019379 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.021538 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-w2fq9" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.027526 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.032156 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zgxk\" (UniqueName: \"kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk\") pod \"keystone-db-sync-62sk4\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.070195 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.109242 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7vv\" (UniqueName: \"kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv\") pod \"swift-operator-index-gjgrg\" (UID: \"29be9301-9c9c-4153-844a-8d729443beb3\") " pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.213737 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm7vv\" (UniqueName: \"kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv\") pod \"swift-operator-index-gjgrg\" (UID: \"29be9301-9c9c-4153-844a-8d729443beb3\") " pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.233158 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm7vv\" (UniqueName: \"kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv\") pod \"swift-operator-index-gjgrg\" (UID: \"29be9301-9c9c-4153-844a-8d729443beb3\") " pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.292987 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-62sk4"] Nov 26 13:34:54 crc kubenswrapper[4747]: W1126 13:34:54.298632 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81d595c7_cabd_4c96_a992_a417f16b449b.slice/crio-bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b WatchSource:0}: Error finding container bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b: Status 404 returned error can't find the container with id bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.370238 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.510117 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-62sk4" event={"ID":"81d595c7-cabd-4c96-a992-a417f16b449b","Type":"ContainerStarted","Data":"bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b"} Nov 26 13:34:54 crc kubenswrapper[4747]: I1126 13:34:54.823270 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:34:54 crc kubenswrapper[4747]: W1126 13:34:54.825925 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29be9301_9c9c_4153_844a_8d729443beb3.slice/crio-5e5e01bdcfade0a6d328027f77c6de8a371aac6c3f7beb6557e5ab10e86cb5a1 WatchSource:0}: Error finding container 5e5e01bdcfade0a6d328027f77c6de8a371aac6c3f7beb6557e5ab10e86cb5a1: Status 404 returned error can't find the container with id 5e5e01bdcfade0a6d328027f77c6de8a371aac6c3f7beb6557e5ab10e86cb5a1 Nov 26 13:34:55 crc kubenswrapper[4747]: I1126 13:34:55.522509 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-gjgrg" event={"ID":"29be9301-9c9c-4153-844a-8d729443beb3","Type":"ContainerStarted","Data":"5e5e01bdcfade0a6d328027f77c6de8a371aac6c3f7beb6557e5ab10e86cb5a1"} Nov 26 13:34:55 crc kubenswrapper[4747]: I1126 13:34:55.524203 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-8zsmn" event={"ID":"1f50f300-6e38-4c1e-9679-76ee15a7dcd1","Type":"ContainerStarted","Data":"4e01224670b45c9c0e731e51a0e3631edeb6221a66cdf1e0560b457caa6a137f"} Nov 26 13:34:55 crc kubenswrapper[4747]: I1126 13:34:55.539571 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-8zsmn" podStartSLOduration=2.804500278 podStartE2EDuration="5.539551717s" podCreationTimestamp="2025-11-26 13:34:50 +0000 UTC" firstStartedPulling="2025-11-26 13:34:51.56636136 +0000 UTC m=+1178.552672385" lastFinishedPulling="2025-11-26 13:34:54.301412809 +0000 UTC m=+1181.287723824" observedRunningTime="2025-11-26 13:34:55.536642535 +0000 UTC m=+1182.522953550" watchObservedRunningTime="2025-11-26 13:34:55.539551717 +0000 UTC m=+1182.525862732" Nov 26 13:34:56 crc kubenswrapper[4747]: I1126 13:34:56.533623 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-gjgrg" event={"ID":"29be9301-9c9c-4153-844a-8d729443beb3","Type":"ContainerStarted","Data":"1793e0c957ad5c9e64ef27b159ca7014f1af4b8da3f485ea429d080dec990264"} Nov 26 13:34:59 crc kubenswrapper[4747]: I1126 13:34:59.610883 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-gjgrg" podStartSLOduration=4.482382208 podStartE2EDuration="5.610860176s" podCreationTimestamp="2025-11-26 13:34:54 +0000 UTC" firstStartedPulling="2025-11-26 13:34:54.828465657 +0000 UTC m=+1181.814776682" lastFinishedPulling="2025-11-26 13:34:55.956943635 +0000 UTC m=+1182.943254650" observedRunningTime="2025-11-26 13:34:56.554505171 +0000 UTC m=+1183.540816206" watchObservedRunningTime="2025-11-26 13:34:59.610860176 +0000 UTC m=+1186.597171231" Nov 26 13:34:59 crc kubenswrapper[4747]: I1126 13:34:59.619880 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:34:59 crc 
kubenswrapper[4747]: I1126 13:34:59.620222 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-gjgrg" podUID="29be9301-9c9c-4153-844a-8d729443beb3" containerName="registry-server" containerID="cri-o://1793e0c957ad5c9e64ef27b159ca7014f1af4b8da3f485ea429d080dec990264" gracePeriod=2 Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.424496 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-s8hf9"] Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.425477 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.431374 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-s8hf9"] Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.502763 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpxp\" (UniqueName: \"kubernetes.io/projected/45a71a83-66e4-445d-8ec2-b2aaee755942-kube-api-access-4mpxp\") pod \"swift-operator-index-s8hf9\" (UID: \"45a71a83-66e4-445d-8ec2-b2aaee755942\") " pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.566685 4747 generic.go:334] "Generic (PLEG): container finished" podID="29be9301-9c9c-4153-844a-8d729443beb3" containerID="1793e0c957ad5c9e64ef27b159ca7014f1af4b8da3f485ea429d080dec990264" exitCode=0 Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.566733 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-gjgrg" event={"ID":"29be9301-9c9c-4153-844a-8d729443beb3","Type":"ContainerDied","Data":"1793e0c957ad5c9e64ef27b159ca7014f1af4b8da3f485ea429d080dec990264"} Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.604995 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mpxp\" (UniqueName: \"kubernetes.io/projected/45a71a83-66e4-445d-8ec2-b2aaee755942-kube-api-access-4mpxp\") pod \"swift-operator-index-s8hf9\" (UID: \"45a71a83-66e4-445d-8ec2-b2aaee755942\") " pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.643637 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpxp\" (UniqueName: \"kubernetes.io/projected/45a71a83-66e4-445d-8ec2-b2aaee755942-kube-api-access-4mpxp\") pod \"swift-operator-index-s8hf9\" (UID: \"45a71a83-66e4-445d-8ec2-b2aaee755942\") " pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:00 crc kubenswrapper[4747]: I1126 13:35:00.755864 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:01 crc kubenswrapper[4747]: I1126 13:35:01.138411 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:35:01 crc kubenswrapper[4747]: I1126 13:35:01.138476 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:35:01 crc kubenswrapper[4747]: I1126 13:35:01.169426 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:35:01 crc kubenswrapper[4747]: I1126 13:35:01.621155 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-8zsmn" Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.370467 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.797546 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-s8hf9"] Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.818392 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.862560 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm7vv\" (UniqueName: \"kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv\") pod \"29be9301-9c9c-4153-844a-8d729443beb3\" (UID: \"29be9301-9c9c-4153-844a-8d729443beb3\") " Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.871136 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv" (OuterVolumeSpecName: "kube-api-access-qm7vv") pod "29be9301-9c9c-4153-844a-8d729443beb3" (UID: "29be9301-9c9c-4153-844a-8d729443beb3"). InnerVolumeSpecName "kube-api-access-qm7vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:04 crc kubenswrapper[4747]: I1126 13:35:04.965152 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm7vv\" (UniqueName: \"kubernetes.io/projected/29be9301-9c9c-4153-844a-8d729443beb3-kube-api-access-qm7vv\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.633215 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-s8hf9" event={"ID":"45a71a83-66e4-445d-8ec2-b2aaee755942","Type":"ContainerStarted","Data":"fe8a0cf51afe465d2644f567682c035897cca4c902de6f7a45d4a1a029c902d2"} Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.634507 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-gjgrg" event={"ID":"29be9301-9c9c-4153-844a-8d729443beb3","Type":"ContainerDied","Data":"5e5e01bdcfade0a6d328027f77c6de8a371aac6c3f7beb6557e5ab10e86cb5a1"} Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.634533 4747 scope.go:117] "RemoveContainer" containerID="1793e0c957ad5c9e64ef27b159ca7014f1af4b8da3f485ea429d080dec990264" Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.634638 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-gjgrg" Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.639040 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-62sk4" event={"ID":"81d595c7-cabd-4c96-a992-a417f16b449b","Type":"ContainerStarted","Data":"1c057fe2f7e648b74dacd4c71a4bb64afcbaffb9fc2fa8d1ebb317ae780a138a"} Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.657628 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-62sk4" podStartSLOduration=1.942809751 podStartE2EDuration="12.657610562s" podCreationTimestamp="2025-11-26 13:34:53 +0000 UTC" firstStartedPulling="2025-11-26 13:34:54.300724892 +0000 UTC m=+1181.287035917" lastFinishedPulling="2025-11-26 13:35:05.015525713 +0000 UTC m=+1192.001836728" observedRunningTime="2025-11-26 13:35:05.657156551 +0000 UTC m=+1192.643467566" watchObservedRunningTime="2025-11-26 13:35:05.657610562 +0000 UTC m=+1192.643921577" Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.678091 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.684366 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-gjgrg"] Nov 26 13:35:05 crc kubenswrapper[4747]: I1126 13:35:05.807922 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29be9301-9c9c-4153-844a-8d729443beb3" path="/var/lib/kubelet/pods/29be9301-9c9c-4153-844a-8d729443beb3/volumes" Nov 26 13:35:06 crc kubenswrapper[4747]: I1126 13:35:06.647655 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-s8hf9" event={"ID":"45a71a83-66e4-445d-8ec2-b2aaee755942","Type":"ContainerStarted","Data":"c9b82e3344aeaae6fba09829c5fa73fba2d928f3a6de941cca17b27421f175c2"} Nov 26 13:35:06 crc kubenswrapper[4747]: I1126 13:35:06.664565 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-s8hf9" podStartSLOduration=5.683209626 podStartE2EDuration="6.664547437s" podCreationTimestamp="2025-11-26 13:35:00 +0000 UTC" firstStartedPulling="2025-11-26 13:35:04.974243499 +0000 UTC m=+1191.960554514" lastFinishedPulling="2025-11-26 13:35:05.95558131 +0000 UTC m=+1192.941892325" observedRunningTime="2025-11-26 13:35:06.662023895 +0000 UTC m=+1193.648334910" watchObservedRunningTime="2025-11-26 13:35:06.664547437 +0000 UTC m=+1193.650858452" Nov 26 13:35:10 crc kubenswrapper[4747]: I1126 13:35:10.757369 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:10 crc kubenswrapper[4747]: I1126 13:35:10.759103 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:10 crc kubenswrapper[4747]: I1126 13:35:10.793090 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:11 crc kubenswrapper[4747]: I1126 13:35:11.723749 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-s8hf9" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.279092 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh"] Nov 26 13:35:26 
crc kubenswrapper[4747]: E1126 13:35:26.280125 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29be9301-9c9c-4153-844a-8d729443beb3" containerName="registry-server" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.280146 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="29be9301-9c9c-4153-844a-8d729443beb3" containerName="registry-server" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.280359 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="29be9301-9c9c-4153-844a-8d729443beb3" containerName="registry-server" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.281697 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.283412 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.296735 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh"] Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.461001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7cnf\" (UniqueName: \"kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.461146 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.461281 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.562741 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.562863 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7cnf\" (UniqueName: \"kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " 
pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.562902 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.563658 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.563831 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.581461 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7cnf\" (UniqueName: \"kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.599879 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:26 crc kubenswrapper[4747]: W1126 13:35:26.997492 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94358332_1a8d_4750_8383_8c05da245874.slice/crio-e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed WatchSource:0}: Error finding container e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed: Status 404 returned error can't find the container with id e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed Nov 26 13:35:26 crc kubenswrapper[4747]: I1126 13:35:26.997623 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh"] Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.063702 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5"] Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.065179 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.097044 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5"] Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.171799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.172181 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnrg\" (UniqueName: \"kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.172386 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.273943 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.274025 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnrg\" (UniqueName: \"kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.274112 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.274581 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " 
pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.274819 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.292660 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnrg\" (UniqueName: \"kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.399700 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.810181 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerStarted","Data":"e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed"} Nov 26 13:35:27 crc kubenswrapper[4747]: I1126 13:35:27.861783 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5"] Nov 26 13:35:28 crc kubenswrapper[4747]: I1126 13:35:28.835664 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerStarted","Data":"0390a38f0fbeb70ed828d54996d6f55c9f4ad348d48b0d75911889b98e3e9b47"} Nov 26 13:35:29 crc kubenswrapper[4747]: I1126 13:35:29.844744 4747 generic.go:334] "Generic (PLEG): container finished" podID="94358332-1a8d-4750-8383-8c05da245874" containerID="5d1e255135773eaba8fc815f8a134e2e4dcf5bf02d9503839d038470b72ee861" exitCode=0 Nov 26 13:35:29 crc kubenswrapper[4747]: I1126 13:35:29.844810 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerDied","Data":"5d1e255135773eaba8fc815f8a134e2e4dcf5bf02d9503839d038470b72ee861"} Nov 26 13:35:29 crc kubenswrapper[4747]: I1126 13:35:29.846839 4747 generic.go:334] "Generic (PLEG): container finished" podID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerID="1444e718dc3638dd100f492286f874ecc5c3008832a1da263f7f75e42f717a9e" exitCode=0 Nov 26 13:35:29 crc kubenswrapper[4747]: I1126 13:35:29.846877 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerDied","Data":"1444e718dc3638dd100f492286f874ecc5c3008832a1da263f7f75e42f717a9e"} Nov 26 13:35:29 crc kubenswrapper[4747]: I1126 13:35:29.847838 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider 
Nov 26 13:35:33 crc kubenswrapper[4747]: I1126 13:35:33.880212 4747 generic.go:334] "Generic (PLEG): container finished" podID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerID="9b4d18fd57f887942e239d7c1cde7b386f3fcc8dd27c70d2d875b05a295ce245" exitCode=0 Nov 26 13:35:33 crc kubenswrapper[4747]: I1126 13:35:33.880461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerDied","Data":"9b4d18fd57f887942e239d7c1cde7b386f3fcc8dd27c70d2d875b05a295ce245"} Nov 26 13:35:34 crc kubenswrapper[4747]: I1126 13:35:34.889569 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerStarted","Data":"b647adaac670168bcb56d23f65ec5991d88102013417ed16e79cb1eeb0270f9a"} Nov 26 13:35:34 crc kubenswrapper[4747]: I1126 13:35:34.911739 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" podStartSLOduration=4.652247962 podStartE2EDuration="7.911038047s" podCreationTimestamp="2025-11-26 13:35:27 +0000 UTC" firstStartedPulling="2025-11-26 13:35:29.848542634 +0000 UTC m=+1216.834853659" lastFinishedPulling="2025-11-26 13:35:33.107332719 +0000 UTC m=+1220.093643744" observedRunningTime="2025-11-26 13:35:34.908941555 +0000 UTC m=+1221.895252580" watchObservedRunningTime="2025-11-26 13:35:34.911038047 +0000 UTC m=+1221.897349092" Nov 26 13:35:35 crc kubenswrapper[4747]: I1126 13:35:35.901344 4747 generic.go:334] "Generic (PLEG): container finished" podID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerID="b647adaac670168bcb56d23f65ec5991d88102013417ed16e79cb1eeb0270f9a" exitCode=0 Nov 26 13:35:35 crc kubenswrapper[4747]: I1126 13:35:35.901488 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerDied","Data":"b647adaac670168bcb56d23f65ec5991d88102013417ed16e79cb1eeb0270f9a"} Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.187016 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.350429 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbnrg\" (UniqueName: \"kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg\") pod \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.351347 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util\") pod \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.351409 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle\") pod \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\" (UID: \"f7ce603f-cc97-45cd-afb6-c8397b5688e6\") " Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.351972 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle" (OuterVolumeSpecName: "bundle") pod "f7ce603f-cc97-45cd-afb6-c8397b5688e6" (UID: "f7ce603f-cc97-45cd-afb6-c8397b5688e6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.363273 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg" (OuterVolumeSpecName: "kube-api-access-bbnrg") pod "f7ce603f-cc97-45cd-afb6-c8397b5688e6" (UID: "f7ce603f-cc97-45cd-afb6-c8397b5688e6"). InnerVolumeSpecName "kube-api-access-bbnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.381652 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util" (OuterVolumeSpecName: "util") pod "f7ce603f-cc97-45cd-afb6-c8397b5688e6" (UID: "f7ce603f-cc97-45cd-afb6-c8397b5688e6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.452237 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbnrg\" (UniqueName: \"kubernetes.io/projected/f7ce603f-cc97-45cd-afb6-c8397b5688e6-kube-api-access-bbnrg\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.452268 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.452279 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7ce603f-cc97-45cd-afb6-c8397b5688e6-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.921165 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" event={"ID":"f7ce603f-cc97-45cd-afb6-c8397b5688e6","Type":"ContainerDied","Data":"0390a38f0fbeb70ed828d54996d6f55c9f4ad348d48b0d75911889b98e3e9b47"} Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.921759 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0390a38f0fbeb70ed828d54996d6f55c9f4ad348d48b0d75911889b98e3e9b47" Nov 26 13:35:37 crc kubenswrapper[4747]: I1126 13:35:37.921254 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5" Nov 26 13:35:38 crc kubenswrapper[4747]: I1126 13:35:38.929881 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerStarted","Data":"1071772d61c2b02f8251632519475e5ae07649db14f5ae58e77d2f63e82578fa"} Nov 26 13:35:39 crc kubenswrapper[4747]: I1126 13:35:39.936952 4747 generic.go:334] "Generic (PLEG): container finished" podID="94358332-1a8d-4750-8383-8c05da245874" containerID="1071772d61c2b02f8251632519475e5ae07649db14f5ae58e77d2f63e82578fa" exitCode=0 Nov 26 13:35:39 crc kubenswrapper[4747]: I1126 13:35:39.937010 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerDied","Data":"1071772d61c2b02f8251632519475e5ae07649db14f5ae58e77d2f63e82578fa"} Nov 26 13:35:40 crc kubenswrapper[4747]: I1126 13:35:40.950505 4747 generic.go:334] "Generic (PLEG): container finished" podID="94358332-1a8d-4750-8383-8c05da245874" containerID="dc3b23fa14cb3cf47708493cf782772328e6e80509a0fa65ba5c065ebaa0d1bc" exitCode=0 Nov 26 13:35:40 crc kubenswrapper[4747]: I1126 13:35:40.950612 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerDied","Data":"dc3b23fa14cb3cf47708493cf782772328e6e80509a0fa65ba5c065ebaa0d1bc"} Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.247112 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.425686 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util\") pod \"94358332-1a8d-4750-8383-8c05da245874\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.427226 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cnf\" (UniqueName: \"kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf\") pod \"94358332-1a8d-4750-8383-8c05da245874\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.427300 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle\") pod \"94358332-1a8d-4750-8383-8c05da245874\" (UID: \"94358332-1a8d-4750-8383-8c05da245874\") " Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.428928 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle" (OuterVolumeSpecName: "bundle") pod "94358332-1a8d-4750-8383-8c05da245874" (UID: "94358332-1a8d-4750-8383-8c05da245874"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.437180 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util" (OuterVolumeSpecName: "util") pod "94358332-1a8d-4750-8383-8c05da245874" (UID: "94358332-1a8d-4750-8383-8c05da245874"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.440255 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf" (OuterVolumeSpecName: "kube-api-access-d7cnf") pod "94358332-1a8d-4750-8383-8c05da245874" (UID: "94358332-1a8d-4750-8383-8c05da245874"). InnerVolumeSpecName "kube-api-access-d7cnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.529398 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.529430 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7cnf\" (UniqueName: \"kubernetes.io/projected/94358332-1a8d-4750-8383-8c05da245874-kube-api-access-d7cnf\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.529442 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94358332-1a8d-4750-8383-8c05da245874-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.980303 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" event={"ID":"94358332-1a8d-4750-8383-8c05da245874","Type":"ContainerDied","Data":"e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed"} Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.980381 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e879b1dda3fc47c81bacfb18fbc25c361b1bbf7d89fc435729d87d85541396ed" Nov 26 13:35:42 crc kubenswrapper[4747]: I1126 13:35:42.980383 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh" Nov 26 13:35:49 crc kubenswrapper[4747]: I1126 13:35:49.018263 4747 generic.go:334] "Generic (PLEG): container finished" podID="81d595c7-cabd-4c96-a992-a417f16b449b" containerID="1c057fe2f7e648b74dacd4c71a4bb64afcbaffb9fc2fa8d1ebb317ae780a138a" exitCode=0 Nov 26 13:35:49 crc kubenswrapper[4747]: I1126 13:35:49.018367 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-62sk4" event={"ID":"81d595c7-cabd-4c96-a992-a417f16b449b","Type":"ContainerDied","Data":"1c057fe2f7e648b74dacd4c71a4bb64afcbaffb9fc2fa8d1ebb317ae780a138a"} Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.327463 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.458190 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zgxk\" (UniqueName: \"kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk\") pod \"81d595c7-cabd-4c96-a992-a417f16b449b\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.458291 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data\") pod \"81d595c7-cabd-4c96-a992-a417f16b449b\" (UID: \"81d595c7-cabd-4c96-a992-a417f16b449b\") " Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.464207 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk" (OuterVolumeSpecName: "kube-api-access-2zgxk") pod "81d595c7-cabd-4c96-a992-a417f16b449b" (UID: "81d595c7-cabd-4c96-a992-a417f16b449b"). InnerVolumeSpecName "kube-api-access-2zgxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.496012 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data" (OuterVolumeSpecName: "config-data") pod "81d595c7-cabd-4c96-a992-a417f16b449b" (UID: "81d595c7-cabd-4c96-a992-a417f16b449b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.559477 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zgxk\" (UniqueName: \"kubernetes.io/projected/81d595c7-cabd-4c96-a992-a417f16b449b-kube-api-access-2zgxk\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:50 crc kubenswrapper[4747]: I1126 13:35:50.559510 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d595c7-cabd-4c96-a992-a417f16b449b-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.033310 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-62sk4" event={"ID":"81d595c7-cabd-4c96-a992-a417f16b449b","Type":"ContainerDied","Data":"bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b"} Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.033596 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4300afa1c96c53cc625892302c9e31c8b534906248b8cb16e432d95732f33b" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.033387 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-62sk4" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.234923 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g"] Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235266 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="util" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235288 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="util" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235308 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235317 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235330 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="pull" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235338 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="pull" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235348 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d595c7-cabd-4c96-a992-a417f16b449b" containerName="keystone-db-sync" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235356 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d595c7-cabd-4c96-a992-a417f16b449b" containerName="keystone-db-sync" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235373 4747 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="util" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235383 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="util" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235400 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="pull" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235408 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="pull" Nov 26 13:35:51 crc kubenswrapper[4747]: E1126 13:35:51.235419 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235426 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235570 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ce603f-cc97-45cd-afb6-c8397b5688e6" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235582 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="94358332-1a8d-4750-8383-8c05da245874" containerName="extract" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.235595 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d595c7-cabd-4c96-a992-a417f16b449b" containerName="keystone-db-sync" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.236127 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.238592 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.248221 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kx52s" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.274867 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlsm5\" (UniqueName: \"kubernetes.io/projected/55113f3f-9d43-4598-8b6b-34ec136691d8-kube-api-access-hlsm5\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.274990 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-webhook-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.275091 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-apiservice-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: 
\"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.306615 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g"] Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.321401 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-79j8m"] Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.322173 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.327508 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.327846 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-4klb9" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.328003 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.327508 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.328230 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.332764 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-79j8m"] Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376527 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-webhook-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376608 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376642 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376692 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-apiservice-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376729 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlsm5\" (UniqueName: \"kubernetes.io/projected/55113f3f-9d43-4598-8b6b-34ec136691d8-kube-api-access-hlsm5\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376761 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.376797 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkb4\" (UniqueName: \"kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.382506 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-webhook-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.384177 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55113f3f-9d43-4598-8b6b-34ec136691d8-apiservice-cert\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.393771 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlsm5\" (UniqueName: \"kubernetes.io/projected/55113f3f-9d43-4598-8b6b-34ec136691d8-kube-api-access-hlsm5\") pod \"swift-operator-controller-manager-7fd68b5878-x5z6g\" (UID: \"55113f3f-9d43-4598-8b6b-34ec136691d8\") " pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.478123 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.478181 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts\") pod \"keystone-bootstrap-79j8m\" (UID: 
\"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.478207 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.478248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.478272 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkb4\" (UniqueName: \"kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.481653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.481780 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.481979 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.482073 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.512575 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkb4\" (UniqueName: \"kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4\") pod \"keystone-bootstrap-79j8m\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.552104 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.636440 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:51 crc kubenswrapper[4747]: I1126 13:35:51.957160 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g"] Nov 26 13:35:52 crc kubenswrapper[4747]: I1126 13:35:52.040830 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" event={"ID":"55113f3f-9d43-4598-8b6b-34ec136691d8","Type":"ContainerStarted","Data":"c94d64fd2b11cf4c8cf1e6fc59e38081ef26f4925a783689a8b5d36592311379"} Nov 26 13:35:52 crc kubenswrapper[4747]: I1126 13:35:52.100743 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-79j8m"] Nov 26 13:35:52 crc kubenswrapper[4747]: W1126 13:35:52.105997 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d731e42_04bc_46f7_b946_fa91ee2b54e2.slice/crio-101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a WatchSource:0}: Error finding container 101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a: Status 404 returned error can't find the container with id 101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a Nov 26 13:35:53 crc kubenswrapper[4747]: I1126 13:35:53.054858 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" event={"ID":"2d731e42-04bc-46f7-b946-fa91ee2b54e2","Type":"ContainerStarted","Data":"e9ab3bfabd07fcdf073154d0daab727a7bd226a9cfb6d4357df97ff823cb8de8"} Nov 26 13:35:53 crc kubenswrapper[4747]: I1126 13:35:53.055276 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" event={"ID":"2d731e42-04bc-46f7-b946-fa91ee2b54e2","Type":"ContainerStarted","Data":"101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a"} Nov 26 13:35:53 crc kubenswrapper[4747]: I1126 13:35:53.074888 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" podStartSLOduration=2.074868981 podStartE2EDuration="2.074868981s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:35:53.073657873 +0000 UTC m=+1240.059968898" watchObservedRunningTime="2025-11-26 13:35:53.074868981 +0000 UTC m=+1240.061180006" Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.069571 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" event={"ID":"55113f3f-9d43-4598-8b6b-34ec136691d8","Type":"ContainerStarted","Data":"9a25f751c487ad7ac0c8a5a55bd1409df18f4858b45a135bf114d42313fefb01"} Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.070008 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.095701 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" podStartSLOduration=1.454517788 podStartE2EDuration="4.095678401s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="2025-11-26 13:35:51.963659827 +0000 UTC m=+1238.949970842" lastFinishedPulling="2025-11-26 
Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.069571 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" event={"ID":"55113f3f-9d43-4598-8b6b-34ec136691d8","Type":"ContainerStarted","Data":"9a25f751c487ad7ac0c8a5a55bd1409df18f4858b45a135bf114d42313fefb01"} Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.070008 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:35:55 crc kubenswrapper[4747]: I1126 13:35:55.095701 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" podStartSLOduration=1.454517788 podStartE2EDuration="4.095678401s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="2025-11-26 13:35:51.963659827 +0000 UTC m=+1238.949970842" lastFinishedPulling="2025-11-26 13:35:54.60482044 +0000 UTC m=+1241.591131455" observedRunningTime="2025-11-26 13:35:55.089720952 +0000 UTC m=+1242.076031967" watchObservedRunningTime="2025-11-26 13:35:55.095678401 +0000 UTC m=+1242.081989416" Nov 26 13:35:56 crc kubenswrapper[4747]: I1126 13:35:56.077912 4747 generic.go:334] "Generic (PLEG): container finished" podID="2d731e42-04bc-46f7-b946-fa91ee2b54e2" containerID="e9ab3bfabd07fcdf073154d0daab727a7bd226a9cfb6d4357df97ff823cb8de8" exitCode=0 Nov 26 13:35:56 crc kubenswrapper[4747]: I1126 13:35:56.078038 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" event={"ID":"2d731e42-04bc-46f7-b946-fa91ee2b54e2","Type":"ContainerDied","Data":"e9ab3bfabd07fcdf073154d0daab727a7bd226a9cfb6d4357df97ff823cb8de8"} Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.467692 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.662534 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpkb4\" (UniqueName: \"kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4\") pod \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.662622 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys\") pod \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.662654 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts\") pod \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.662702 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys\") pod \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.662795 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data\") pod \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\" (UID: \"2d731e42-04bc-46f7-b946-fa91ee2b54e2\") " Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.667618 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2d731e42-04bc-46f7-b946-fa91ee2b54e2" (UID: "2d731e42-04bc-46f7-b946-fa91ee2b54e2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.668154 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4" (OuterVolumeSpecName: "kube-api-access-rpkb4") pod "2d731e42-04bc-46f7-b946-fa91ee2b54e2" (UID: "2d731e42-04bc-46f7-b946-fa91ee2b54e2").
InnerVolumeSpecName "kube-api-access-rpkb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.668966 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d731e42-04bc-46f7-b946-fa91ee2b54e2" (UID: "2d731e42-04bc-46f7-b946-fa91ee2b54e2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.675171 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts" (OuterVolumeSpecName: "scripts") pod "2d731e42-04bc-46f7-b946-fa91ee2b54e2" (UID: "2d731e42-04bc-46f7-b946-fa91ee2b54e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.692411 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data" (OuterVolumeSpecName: "config-data") pod "2d731e42-04bc-46f7-b946-fa91ee2b54e2" (UID: "2d731e42-04bc-46f7-b946-fa91ee2b54e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.764166 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.764209 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpkb4\" (UniqueName: \"kubernetes.io/projected/2d731e42-04bc-46f7-b946-fa91ee2b54e2-kube-api-access-rpkb4\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.764224 4747 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.764235 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:57 crc kubenswrapper[4747]: I1126 13:35:57.764245 4747 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d731e42-04bc-46f7-b946-fa91ee2b54e2-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.094169 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" event={"ID":"2d731e42-04bc-46f7-b946-fa91ee2b54e2","Type":"ContainerDied","Data":"101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a"} Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.094206 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101da6e6c27de33949956e7957c1b0e2c99d4e205cc06845231b06aadc031e2a" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.094234 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-79j8m" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.173440 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-6587955ddb-c2fhn"] Nov 26 13:35:58 crc kubenswrapper[4747]: E1126 13:35:58.173802 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d731e42-04bc-46f7-b946-fa91ee2b54e2" containerName="keystone-bootstrap" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.173826 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d731e42-04bc-46f7-b946-fa91ee2b54e2" containerName="keystone-bootstrap" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.174003 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d731e42-04bc-46f7-b946-fa91ee2b54e2" containerName="keystone-bootstrap" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.174629 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.177747 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.177871 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.178225 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.178459 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-4klb9" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.189808 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-6587955ddb-c2fhn"] Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.371303 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-fernet-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.371626 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-scripts\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.371742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-config-data\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.371852 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxqk\" (UniqueName: \"kubernetes.io/projected/9e011df0-47d0-4e00-ad3a-044212d955b1-kube-api-access-4fxqk\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc 
kubenswrapper[4747]: I1126 13:35:58.371987 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-credential-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.473713 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-scripts\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.474509 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-config-data\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.474642 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxqk\" (UniqueName: \"kubernetes.io/projected/9e011df0-47d0-4e00-ad3a-044212d955b1-kube-api-access-4fxqk\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.475417 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-credential-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.475670 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-fernet-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.477169 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-scripts\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.478919 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-config-data\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.479644 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-credential-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.479847 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e011df0-47d0-4e00-ad3a-044212d955b1-fernet-keys\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.504796 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxqk\" (UniqueName: \"kubernetes.io/projected/9e011df0-47d0-4e00-ad3a-044212d955b1-kube-api-access-4fxqk\") pod \"keystone-6587955ddb-c2fhn\" (UID: \"9e011df0-47d0-4e00-ad3a-044212d955b1\") " pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:58 crc kubenswrapper[4747]: I1126 13:35:58.792003 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.229323 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-6587955ddb-c2fhn"] Nov 26 13:35:59 crc kubenswrapper[4747]: W1126 13:35:59.237202 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e011df0_47d0_4e00_ad3a_044212d955b1.slice/crio-cdfe36d325c7e0157453efab7e3bd33c536750523c335342c7cc347f7019097a WatchSource:0}: Error finding container cdfe36d325c7e0157453efab7e3bd33c536750523c335342c7cc347f7019097a: Status 404 returned error can't find the container with id cdfe36d325c7e0157453efab7e3bd33c536750523c335342c7cc347f7019097a Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.518098 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c"] Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.519116 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.521080 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.521344 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5flm8" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.531495 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c"] Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.690588 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-webhook-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.690767 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-apiservice-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.690815 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582n5\" (UniqueName: \"kubernetes.io/projected/f9344af4-6999-4694-8a62-3541837564f0-kube-api-access-582n5\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.791689 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-apiservice-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.792120 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582n5\" (UniqueName: \"kubernetes.io/projected/f9344af4-6999-4694-8a62-3541837564f0-kube-api-access-582n5\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.792161 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-webhook-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.796124 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-apiservice-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.796617 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9344af4-6999-4694-8a62-3541837564f0-webhook-cert\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.811636 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582n5\" (UniqueName: \"kubernetes.io/projected/f9344af4-6999-4694-8a62-3541837564f0-kube-api-access-582n5\") pod \"horizon-operator-controller-manager-56c4598b9-2tw2c\" (UID: \"f9344af4-6999-4694-8a62-3541837564f0\") " pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:35:59 crc kubenswrapper[4747]: I1126 13:35:59.834434 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:36:00 crc kubenswrapper[4747]: I1126 13:36:00.106561 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" event={"ID":"9e011df0-47d0-4e00-ad3a-044212d955b1","Type":"ContainerStarted","Data":"d55f2d046b03f9dd00beb7485ded963a5cdc48b94d21990524d64052e177e5f8"} Nov 26 13:36:00 crc kubenswrapper[4747]: I1126 13:36:00.106608 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" event={"ID":"9e011df0-47d0-4e00-ad3a-044212d955b1","Type":"ContainerStarted","Data":"cdfe36d325c7e0157453efab7e3bd33c536750523c335342c7cc347f7019097a"} Nov 26 13:36:00 crc kubenswrapper[4747]: I1126 13:36:00.106745 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:36:00 crc kubenswrapper[4747]: I1126 13:36:00.131233 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" podStartSLOduration=2.131211898 podStartE2EDuration="2.131211898s" podCreationTimestamp="2025-11-26 13:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:36:00.120744534 +0000 UTC m=+1247.107055589" watchObservedRunningTime="2025-11-26 13:36:00.131211898 +0000 UTC m=+1247.117522913" Nov 26 13:36:00 crc kubenswrapper[4747]: W1126 13:36:00.238199 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9344af4_6999_4694_8a62_3541837564f0.slice/crio-4ec6d88b20024d529c2609f71c899af721c33dd5dfecd614bca7f4735694e486 WatchSource:0}: Error finding container 4ec6d88b20024d529c2609f71c899af721c33dd5dfecd614bca7f4735694e486: Status 404 returned error can't find the container with id 4ec6d88b20024d529c2609f71c899af721c33dd5dfecd614bca7f4735694e486 Nov 26 13:36:00 crc kubenswrapper[4747]: I1126 13:36:00.239984 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c"] Nov 26 13:36:01 crc kubenswrapper[4747]: I1126 13:36:01.113596 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" event={"ID":"f9344af4-6999-4694-8a62-3541837564f0","Type":"ContainerStarted","Data":"4ec6d88b20024d529c2609f71c899af721c33dd5dfecd614bca7f4735694e486"} Nov 26 13:36:01 crc kubenswrapper[4747]: I1126 13:36:01.556573 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7fd68b5878-x5z6g" Nov 26 13:36:04 crc kubenswrapper[4747]: I1126 13:36:04.135414 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" event={"ID":"f9344af4-6999-4694-8a62-3541837564f0","Type":"ContainerStarted","Data":"63f7dabbfdfd84f3554329e06a3e3eb52cd57d0cb9f79f4fc0e1d8519cbf2660"} Nov 26 13:36:04 crc kubenswrapper[4747]: I1126 13:36:04.136755 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:36:04 crc kubenswrapper[4747]: I1126 13:36:04.151727 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" podStartSLOduration=2.023670996 podStartE2EDuration="5.151712387s" podCreationTimestamp="2025-11-26 13:35:59 +0000 UTC" firstStartedPulling="2025-11-26 13:36:00.240871457 +0000 UTC m=+1247.227182472" lastFinishedPulling="2025-11-26 13:36:03.368912848 +0000 UTC m=+1250.355223863" observedRunningTime="2025-11-26 13:36:04.151077292 +0000 UTC m=+1251.137388307" watchObservedRunningTime="2025-11-26 13:36:04.151712387 +0000 UTC m=+1251.138023402" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.852969 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.857559 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.860449 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.860765 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-s6s4s" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.860966 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.863116 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Nov 26 13:36:06 crc kubenswrapper[4747]: I1126 13:36:06.868670 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.004019 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.004082 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-lock\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.004109 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-cache\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.004158 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.004210 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcr7z\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-kube-api-access-lcr7z\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.027211 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.028255 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.029834 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-n2qhn" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.038306 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.105672 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-lock\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.105708 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.105762 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-cache\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.105802 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.105839 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcr7z\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-kube-api-access-lcr7z\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.106179 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.106197 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.106236 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. No retries permitted until 2025-11-26 13:36:07.606221144 +0000 UTC m=+1254.592532159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.106434 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") device mount path \"/mnt/openstack/pv19\"" pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.106486 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-cache\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.106862 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0c003abf-6288-4d54-8c91-07c1eebe0123-lock\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.124446 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcr7z\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-kube-api-access-lcr7z\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.126455 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.207495 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxk4\" (UniqueName: \"kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4\") pod \"glance-operator-index-4szxh\" (UID: \"9554630a-ba77-46dc-9f92-b2a9591a69f7\") " pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.308587 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxk4\" (UniqueName: \"kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4\") pod \"glance-operator-index-4szxh\" (UID: \"9554630a-ba77-46dc-9f92-b2a9591a69f7\") " pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.330514 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxk4\" (UniqueName: \"kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4\") pod \"glance-operator-index-4szxh\" (UID: \"9554630a-ba77-46dc-9f92-b2a9591a69f7\") " pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.346723 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.615865 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.616079 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.616570 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: E1126 13:36:07.616693 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. No retries permitted until 2025-11-26 13:36:08.61661084 +0000 UTC m=+1255.602921855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found Nov 26 13:36:07 crc kubenswrapper[4747]: I1126 13:36:07.899817 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:07 crc kubenswrapper[4747]: W1126 13:36:07.902877 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9554630a_ba77_46dc_9f92_b2a9591a69f7.slice/crio-607f9694177fceb7dd8edef358e8179116805aa83380c5dd64d7881eec7e1f2f WatchSource:0}: Error finding container 607f9694177fceb7dd8edef358e8179116805aa83380c5dd64d7881eec7e1f2f: Status 404 returned error can't find the container with id 607f9694177fceb7dd8edef358e8179116805aa83380c5dd64d7881eec7e1f2f Nov 26 13:36:08 crc kubenswrapper[4747]: I1126 13:36:08.164305 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-4szxh" event={"ID":"9554630a-ba77-46dc-9f92-b2a9591a69f7","Type":"ContainerStarted","Data":"607f9694177fceb7dd8edef358e8179116805aa83380c5dd64d7881eec7e1f2f"} Nov 26 13:36:08 crc kubenswrapper[4747]: I1126 13:36:08.630227 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:08 crc kubenswrapper[4747]: E1126 13:36:08.630559 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:08 crc kubenswrapper[4747]: E1126 13:36:08.630931 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:08 crc kubenswrapper[4747]: E1126 13:36:08.631050 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. 
No retries permitted until 2025-11-26 13:36:10.631012743 +0000 UTC m=+1257.617323798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found Nov 26 13:36:09 crc kubenswrapper[4747]: I1126 13:36:09.839127 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-56c4598b9-2tw2c" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.659154 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:10 crc kubenswrapper[4747]: E1126 13:36:10.659356 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:10 crc kubenswrapper[4747]: E1126 13:36:10.659394 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:10 crc kubenswrapper[4747]: E1126 13:36:10.659580 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. No retries permitted until 2025-11-26 13:36:14.659517051 +0000 UTC m=+1261.645828066 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found
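
[Editor's note] The repeated MountVolume.SetUp failures above have a single cause: the etc-swift projected volume sources the swift-ring-files ConfigMap, which does not exist until the swift-ring-rebalance job (created just below) publishes it. Note durationBeforeRetry doubling across attempts, 500ms, then 1s, 2s, and 4s: nestedpendingoperations backs off exponentially per volume rather than hot-looping. A sketch of that doubling schedule; the initial delay matches the log, but the cap here is an assumption, not the kubelet's configured value:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff reproduces the retry gaps seen in the log: 500ms, 1s, 2s, 4s, ...
    func backoff(attempt int) time.Duration {
        d := 500 * time.Millisecond << attempt // double on each failed attempt
        const maxDelay = 2 * time.Minute       // illustrative cap so retries never stop entirely
        if d > maxDelay {
            d = maxDelay
        }
        return d
    }

    func main() {
        for i := 0; i < 5; i++ {
            fmt.Println(backoff(i)) // 500ms 1s 2s 4s 8s
        }
    }

Once the ConfigMap appears, the next scheduled retry succeeds with no operator intervention, which is exactly the eventual-consistency behavior the backoff is designed for.
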
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.912239 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.912529 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.912547 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.929249 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-ptl2z"] Nov 26 13:36:10 crc kubenswrapper[4747]: E1126 13:36:10.929607 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-2mhch ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[dispersionconf etc-swift kube-api-access-2mhch ring-data-devices scripts swiftconf]: context canceled" pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" podUID="efd9e1cf-d062-485a-b474-5b1dab4e5daa" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.953917 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-pc4vw"] Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.955221 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.959259 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-ptl2z"] Nov 26 13:36:10 crc kubenswrapper[4747]: I1126 13:36:10.968026 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-pc4vw"] Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012616 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012692 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012719 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012816 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " 
pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012865 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.012892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhch\" (UniqueName: \"kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.116702 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.116967 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117085 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117159 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117246 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhch\" (UniqueName: \"kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117332 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117394 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zsxs\" (UniqueName: \"kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs\") pod 
\"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117461 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117539 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117604 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.117752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.123664 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.123903 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.124389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.124440 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: 
\"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.130437 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.145630 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhch\" (UniqueName: \"kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch\") pod \"swift-ring-rebalance-ptl2z\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.200950 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.212146 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zsxs\" (UniqueName: \"kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218826 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218852 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218875 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.218903 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " 
pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.219624 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.219722 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.220045 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.222591 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.234515 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zsxs\" (UniqueName: \"kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.237926 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf\") pod \"swift-ring-rebalance-pc4vw\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320266 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320326 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320538 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320799 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320844 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhch\" (UniqueName: \"kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320916 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.320978 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.321009 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts\") pod \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\" (UID: \"efd9e1cf-d062-485a-b474-5b1dab4e5daa\") " Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.321835 4747 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.322024 4747 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/efd9e1cf-d062-485a-b474-5b1dab4e5daa-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.347583 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts" (OuterVolumeSpecName: "scripts") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.350122 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch" (OuterVolumeSpecName: "kube-api-access-2mhch") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "kube-api-access-2mhch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.350452 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.350764 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.358140 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "efd9e1cf-d062-485a-b474-5b1dab4e5daa" (UID: "efd9e1cf-d062-485a-b474-5b1dab4e5daa"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.423740 4747 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.423772 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efd9e1cf-d062-485a-b474-5b1dab4e5daa-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.423784 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhch\" (UniqueName: \"kubernetes.io/projected/efd9e1cf-d062-485a-b474-5b1dab4e5daa-kube-api-access-2mhch\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.423795 4747 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/efd9e1cf-d062-485a-b474-5b1dab4e5daa-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:11 crc kubenswrapper[4747]: I1126 13:36:11.426511 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.030209 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-5zwq9"] Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.031707 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.050162 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-5zwq9"] Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.137442 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxqr\" (UniqueName: \"kubernetes.io/projected/c34b19b7-bcf1-4d32-8f5f-e5596cac7b63-kube-api-access-ftxqr\") pod \"glance-operator-index-5zwq9\" (UID: \"c34b19b7-bcf1-4d32-8f5f-e5596cac7b63\") " pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.207628 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-ptl2z" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.242874 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-ptl2z"] Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.249327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftxqr\" (UniqueName: \"kubernetes.io/projected/c34b19b7-bcf1-4d32-8f5f-e5596cac7b63-kube-api-access-ftxqr\") pod \"glance-operator-index-5zwq9\" (UID: \"c34b19b7-bcf1-4d32-8f5f-e5596cac7b63\") " pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.252971 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-ptl2z"] Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.269855 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftxqr\" (UniqueName: \"kubernetes.io/projected/c34b19b7-bcf1-4d32-8f5f-e5596cac7b63-kube-api-access-ftxqr\") pod \"glance-operator-index-5zwq9\" (UID: \"c34b19b7-bcf1-4d32-8f5f-e5596cac7b63\") " pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.353658 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.484879 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-pc4vw"] Nov 26 13:36:12 crc kubenswrapper[4747]: W1126 13:36:12.499389 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f178ee_ada4_42e2_85c0_a4ebf5322d5d.slice/crio-f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4 WatchSource:0}: Error finding container f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4: Status 404 returned error can't find the container with id f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4 Nov 26 13:36:12 crc kubenswrapper[4747]: I1126 13:36:12.775915 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-5zwq9"] Nov 26 13:36:12 crc kubenswrapper[4747]: W1126 13:36:12.784334 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34b19b7_bcf1_4d32_8f5f_e5596cac7b63.slice/crio-18a9781d74cee5b688c2c30ccb39a58a61188020e9ee298d2dbcff4adc10cd1c WatchSource:0}: Error finding container 18a9781d74cee5b688c2c30ccb39a58a61188020e9ee298d2dbcff4adc10cd1c: Status 404 returned error can't find the container with id 18a9781d74cee5b688c2c30ccb39a58a61188020e9ee298d2dbcff4adc10cd1c Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.227376 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-index-4szxh" podUID="9554630a-ba77-46dc-9f92-b2a9591a69f7" containerName="registry-server" containerID="cri-o://db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d" gracePeriod=2 Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.227723 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-4szxh" event={"ID":"9554630a-ba77-46dc-9f92-b2a9591a69f7","Type":"ContainerStarted","Data":"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d"} Nov 26 
13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.229331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" event={"ID":"52f178ee-ada4-42e2-85c0-a4ebf5322d5d","Type":"ContainerStarted","Data":"f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4"} Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.234259 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-5zwq9" event={"ID":"c34b19b7-bcf1-4d32-8f5f-e5596cac7b63","Type":"ContainerStarted","Data":"18a9781d74cee5b688c2c30ccb39a58a61188020e9ee298d2dbcff4adc10cd1c"} Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.249617 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-4szxh" podStartSLOduration=1.9047255760000001 podStartE2EDuration="6.249591976s" podCreationTimestamp="2025-11-26 13:36:07 +0000 UTC" firstStartedPulling="2025-11-26 13:36:07.904674477 +0000 UTC m=+1254.890985492" lastFinishedPulling="2025-11-26 13:36:12.249540877 +0000 UTC m=+1259.235851892" observedRunningTime="2025-11-26 13:36:13.243565096 +0000 UTC m=+1260.229876121" watchObservedRunningTime="2025-11-26 13:36:13.249591976 +0000 UTC m=+1260.235903001" Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.720426 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.809302 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd9e1cf-d062-485a-b474-5b1dab4e5daa" path="/var/lib/kubelet/pods/efd9e1cf-d062-485a-b474-5b1dab4e5daa/volumes" Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.877032 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxk4\" (UniqueName: \"kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4\") pod \"9554630a-ba77-46dc-9f92-b2a9591a69f7\" (UID: \"9554630a-ba77-46dc-9f92-b2a9591a69f7\") " Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.884577 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4" (OuterVolumeSpecName: "kube-api-access-fmxk4") pod "9554630a-ba77-46dc-9f92-b2a9591a69f7" (UID: "9554630a-ba77-46dc-9f92-b2a9591a69f7"). InnerVolumeSpecName "kube-api-access-fmxk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:13 crc kubenswrapper[4747]: I1126 13:36:13.980112 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmxk4\" (UniqueName: \"kubernetes.io/projected/9554630a-ba77-46dc-9f92-b2a9591a69f7-kube-api-access-fmxk4\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.250998 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-5zwq9" event={"ID":"c34b19b7-bcf1-4d32-8f5f-e5596cac7b63","Type":"ContainerStarted","Data":"70d4d03ac2169722e8b4992f400c68b0646777a61dd734006d9f42547af45aff"} Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.252726 4747 generic.go:334] "Generic (PLEG): container finished" podID="9554630a-ba77-46dc-9f92-b2a9591a69f7" containerID="db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d" exitCode=0 Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.252771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-4szxh" event={"ID":"9554630a-ba77-46dc-9f92-b2a9591a69f7","Type":"ContainerDied","Data":"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d"} Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.252787 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-4szxh" event={"ID":"9554630a-ba77-46dc-9f92-b2a9591a69f7","Type":"ContainerDied","Data":"607f9694177fceb7dd8edef358e8179116805aa83380c5dd64d7881eec7e1f2f"} Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.252804 4747 scope.go:117] "RemoveContainer" containerID="db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.252802 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-4szxh" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.278745 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-5zwq9" podStartSLOduration=1.9146245469999998 podStartE2EDuration="2.278722421s" podCreationTimestamp="2025-11-26 13:36:12 +0000 UTC" firstStartedPulling="2025-11-26 13:36:12.788878935 +0000 UTC m=+1259.775189950" lastFinishedPulling="2025-11-26 13:36:13.152976799 +0000 UTC m=+1260.139287824" observedRunningTime="2025-11-26 13:36:14.272382794 +0000 UTC m=+1261.258693829" watchObservedRunningTime="2025-11-26 13:36:14.278722421 +0000 UTC m=+1261.265033436" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.321304 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.323110 4747 scope.go:117] "RemoveContainer" containerID="db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d" Nov 26 13:36:14 crc kubenswrapper[4747]: E1126 13:36:14.323537 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d\": container with ID starting with db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d not found: ID does not exist" containerID="db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.323577 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d"} err="failed to get container status \"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d\": rpc error: code = NotFound desc = could not find container \"db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d\": container with ID starting with db847dff20a7eb22e8e78a94e2b4ed3f86fd0fdc538df6249c9dd75a7c09098d not found: ID does not exist" Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.336451 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-4szxh"] Nov 26 13:36:14 crc kubenswrapper[4747]: I1126 13:36:14.692506 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:14 crc kubenswrapper[4747]: E1126 13:36:14.692684 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:14 crc kubenswrapper[4747]: E1126 13:36:14.692701 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:14 crc kubenswrapper[4747]: E1126 13:36:14.692751 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. No retries permitted until 2025-11-26 13:36:22.692732926 +0000 UTC m=+1269.679043941 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found Nov 26 13:36:15 crc kubenswrapper[4747]: I1126 13:36:15.807232 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9554630a-ba77-46dc-9f92-b2a9591a69f7" path="/var/lib/kubelet/pods/9554630a-ba77-46dc-9f92-b2a9591a69f7/volumes" Nov 26 13:36:21 crc kubenswrapper[4747]: I1126 13:36:21.311587 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" event={"ID":"52f178ee-ada4-42e2-85c0-a4ebf5322d5d","Type":"ContainerStarted","Data":"38c36facfd0ff14bd67921d0c20d2e2080fcbaf6d0b0f92d07dde63813a0370a"} Nov 26 13:36:21 crc kubenswrapper[4747]: I1126 13:36:21.334665 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" podStartSLOduration=3.204764254 podStartE2EDuration="11.334646288s" podCreationTimestamp="2025-11-26 13:36:10 +0000 UTC" firstStartedPulling="2025-11-26 13:36:12.505917947 +0000 UTC m=+1259.492228962" lastFinishedPulling="2025-11-26 13:36:20.635799981 +0000 UTC m=+1267.622110996" observedRunningTime="2025-11-26 13:36:21.334456824 +0000 UTC m=+1268.320767829" watchObservedRunningTime="2025-11-26 13:36:21.334646288 +0000 UTC m=+1268.320957303" Nov 26 13:36:22 crc kubenswrapper[4747]: I1126 13:36:22.354424 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:22 crc kubenswrapper[4747]: I1126 13:36:22.356090 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:22 crc kubenswrapper[4747]: I1126 13:36:22.404453 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:22 crc kubenswrapper[4747]: I1126 13:36:22.722951 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:22 crc kubenswrapper[4747]: E1126 13:36:22.723282 4747 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:36:22 crc kubenswrapper[4747]: E1126 13:36:22.723335 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:36:22 crc kubenswrapper[4747]: E1126 13:36:22.723450 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift podName:0c003abf-6288-4d54-8c91-07c1eebe0123 nodeName:}" failed. No retries permitted until 2025-11-26 13:36:38.723411825 +0000 UTC m=+1285.709722880 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift") pod "swift-storage-0" (UID: "0c003abf-6288-4d54-8c91-07c1eebe0123") : configmap "swift-ring-files" not found Nov 26 13:36:23 crc kubenswrapper[4747]: I1126 13:36:23.382354 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-5zwq9" Nov 26 13:36:28 crc kubenswrapper[4747]: I1126 13:36:28.381082 4747 generic.go:334] "Generic (PLEG): container finished" podID="52f178ee-ada4-42e2-85c0-a4ebf5322d5d" containerID="38c36facfd0ff14bd67921d0c20d2e2080fcbaf6d0b0f92d07dde63813a0370a" exitCode=0 Nov 26 13:36:28 crc kubenswrapper[4747]: I1126 13:36:28.381258 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" event={"ID":"52f178ee-ada4-42e2-85c0-a4ebf5322d5d","Type":"ContainerDied","Data":"38c36facfd0ff14bd67921d0c20d2e2080fcbaf6d0b0f92d07dde63813a0370a"} Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.693689 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.831952 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zsxs\" (UniqueName: \"kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.831996 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.832018 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.832693 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.833096 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.833169 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.833205 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf\") pod \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\" (UID: \"52f178ee-ada4-42e2-85c0-a4ebf5322d5d\") " Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.833587 4747 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.833838 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.853903 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs" (OuterVolumeSpecName: "kube-api-access-6zsxs") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "kube-api-access-6zsxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.858249 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.864478 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.866796 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts" (OuterVolumeSpecName: "scripts") pod "52f178ee-ada4-42e2-85c0-a4ebf5322d5d" (UID: "52f178ee-ada4-42e2-85c0-a4ebf5322d5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.935016 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zsxs\" (UniqueName: \"kubernetes.io/projected/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-kube-api-access-6zsxs\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.935119 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.935145 4747 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.935173 4747 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:29 crc kubenswrapper[4747]: I1126 13:36:29.935199 4747 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52f178ee-ada4-42e2-85c0-a4ebf5322d5d-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:30 crc kubenswrapper[4747]: I1126 13:36:30.230787 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-6587955ddb-c2fhn" Nov 26 13:36:30 crc kubenswrapper[4747]: I1126 13:36:30.397672 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" event={"ID":"52f178ee-ada4-42e2-85c0-a4ebf5322d5d","Type":"ContainerDied","Data":"f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4"} Nov 26 13:36:30 crc kubenswrapper[4747]: I1126 13:36:30.397717 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f439c9a3ce84bf3bf2a7d3b39540b197a983334150b8697fdc73fd3e689fe3e4" Nov 26 13:36:30 crc kubenswrapper[4747]: I1126 13:36:30.397774 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-pc4vw" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.692231 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv"] Nov 26 13:36:32 crc kubenswrapper[4747]: E1126 13:36:32.693166 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9554630a-ba77-46dc-9f92-b2a9591a69f7" containerName="registry-server" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.693184 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9554630a-ba77-46dc-9f92-b2a9591a69f7" containerName="registry-server" Nov 26 13:36:32 crc kubenswrapper[4747]: E1126 13:36:32.693209 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f178ee-ada4-42e2-85c0-a4ebf5322d5d" containerName="swift-ring-rebalance" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.693217 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f178ee-ada4-42e2-85c0-a4ebf5322d5d" containerName="swift-ring-rebalance" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.693368 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f178ee-ada4-42e2-85c0-a4ebf5322d5d" containerName="swift-ring-rebalance" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.693382 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9554630a-ba77-46dc-9f92-b2a9591a69f7" containerName="registry-server" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.696004 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.698753 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rm7vv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.715309 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv"] Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.789466 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.789518 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.789580 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz8j\" (UniqueName: \"kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " 
pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.891180 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.891223 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.891270 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlz8j\" (UniqueName: \"kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.891753 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.891988 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:32 crc kubenswrapper[4747]: I1126 13:36:32.917127 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlz8j\" (UniqueName: \"kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j\") pod \"0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.017437 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.479109 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr"] Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.480856 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.490998 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr"] Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.495913 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.526205 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv"] Nov 26 13:36:33 crc kubenswrapper[4747]: W1126 13:36:33.531502 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55715f5c_5a0d_4d68_9a7f_c4918d4fe9d1.slice/crio-31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756 WatchSource:0}: Error finding container 31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756: Status 404 returned error can't find the container with id 31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756 Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.602247 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-etc-swift\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.602300 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67117b2-ffe7-4796-8b7b-6f0065a87846-config-data\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.602326 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-run-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.602605 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-log-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.602763 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lng\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-kube-api-access-g8lng\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.703920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8lng\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-kube-api-access-g8lng\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: 
\"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.703994 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-etc-swift\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.704033 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67117b2-ffe7-4796-8b7b-6f0065a87846-config-data\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.704080 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-run-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.704187 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-log-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.705021 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-run-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.705176 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e67117b2-ffe7-4796-8b7b-6f0065a87846-log-httpd\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.709944 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67117b2-ffe7-4796-8b7b-6f0065a87846-config-data\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.710003 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-etc-swift\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.728176 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lng\" (UniqueName: \"kubernetes.io/projected/e67117b2-ffe7-4796-8b7b-6f0065a87846-kube-api-access-g8lng\") pod \"swift-proxy-6bd58cfcf7-flzjr\" (UID: \"e67117b2-ffe7-4796-8b7b-6f0065a87846\") " 
pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:33 crc kubenswrapper[4747]: I1126 13:36:33.799179 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:34 crc kubenswrapper[4747]: I1126 13:36:34.304383 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr"] Nov 26 13:36:34 crc kubenswrapper[4747]: I1126 13:36:34.432541 4747 generic.go:334] "Generic (PLEG): container finished" podID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerID="1e90efe9415de3380a3187d6df222cd0709f1d0099a2d437a1457775502a9e28" exitCode=0 Nov 26 13:36:34 crc kubenswrapper[4747]: I1126 13:36:34.432607 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" event={"ID":"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1","Type":"ContainerDied","Data":"1e90efe9415de3380a3187d6df222cd0709f1d0099a2d437a1457775502a9e28"} Nov 26 13:36:34 crc kubenswrapper[4747]: I1126 13:36:34.432632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" event={"ID":"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1","Type":"ContainerStarted","Data":"31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756"} Nov 26 13:36:34 crc kubenswrapper[4747]: I1126 13:36:34.433972 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" event={"ID":"e67117b2-ffe7-4796-8b7b-6f0065a87846","Type":"ContainerStarted","Data":"7a6ea1724326bba3c5a0c7ef53e9b987c5eba43ffa94c6eb61514053c4eb1ff3"} Nov 26 13:36:35 crc kubenswrapper[4747]: I1126 13:36:35.442511 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" event={"ID":"e67117b2-ffe7-4796-8b7b-6f0065a87846","Type":"ContainerStarted","Data":"ef52b17b74d752e42f189fdf908051e13207d1e90a690a42519edccf26569018"} Nov 26 13:36:35 crc kubenswrapper[4747]: I1126 13:36:35.442559 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" event={"ID":"e67117b2-ffe7-4796-8b7b-6f0065a87846","Type":"ContainerStarted","Data":"5684d5dc9650287917551b797fa776f298ff8de61749226bb8115d676b507c8d"} Nov 26 13:36:35 crc kubenswrapper[4747]: I1126 13:36:35.442674 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:35 crc kubenswrapper[4747]: I1126 13:36:35.442699 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:35 crc kubenswrapper[4747]: I1126 13:36:35.463878 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" podStartSLOduration=2.463856927 podStartE2EDuration="2.463856927s" podCreationTimestamp="2025-11-26 13:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:36:35.458464781 +0000 UTC m=+1282.444775796" watchObservedRunningTime="2025-11-26 13:36:35.463856927 +0000 UTC m=+1282.450167942" Nov 26 13:36:36 crc kubenswrapper[4747]: I1126 13:36:36.452839 4747 generic.go:334] "Generic (PLEG): container finished" podID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" 
containerID="6d9a5177f84666636fdc4bf2c4585b7f87b6fad3adf4b317caa8ae1c9e55c17b" exitCode=0 Nov 26 13:36:36 crc kubenswrapper[4747]: I1126 13:36:36.452885 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" event={"ID":"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1","Type":"ContainerDied","Data":"6d9a5177f84666636fdc4bf2c4585b7f87b6fad3adf4b317caa8ae1c9e55c17b"} Nov 26 13:36:37 crc kubenswrapper[4747]: I1126 13:36:37.465548 4747 generic.go:334] "Generic (PLEG): container finished" podID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerID="df87fe3537c4daeaa956a938ce93e5cd7312300731a56f4d06510ab73bff0d4c" exitCode=0 Nov 26 13:36:37 crc kubenswrapper[4747]: I1126 13:36:37.465623 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" event={"ID":"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1","Type":"ContainerDied","Data":"df87fe3537c4daeaa956a938ce93e5cd7312300731a56f4d06510ab73bff0d4c"} Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.778218 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.785315 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0c003abf-6288-4d54-8c91-07c1eebe0123-etc-swift\") pod \"swift-storage-0\" (UID: \"0c003abf-6288-4d54-8c91-07c1eebe0123\") " pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.816291 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.973807 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.984903 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlz8j\" (UniqueName: \"kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j\") pod \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.985184 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util\") pod \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.985445 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle\") pod \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\" (UID: \"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1\") " Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.986975 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle" (OuterVolumeSpecName: "bundle") pod "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" (UID: "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:38 crc kubenswrapper[4747]: I1126 13:36:38.990600 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j" (OuterVolumeSpecName: "kube-api-access-hlz8j") pod "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" (UID: "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1"). InnerVolumeSpecName "kube-api-access-hlz8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.008998 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util" (OuterVolumeSpecName: "util") pod "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" (UID: "55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.087205 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlz8j\" (UniqueName: \"kubernetes.io/projected/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-kube-api-access-hlz8j\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.089451 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.089464 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.436201 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 26 13:36:39 crc kubenswrapper[4747]: W1126 13:36:39.444750 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c003abf_6288_4d54_8c91_07c1eebe0123.slice/crio-eba712a9518cf3a6d8e374abd0f5397de8c27cfefabbc8fe388123adf42da686 WatchSource:0}: Error finding container eba712a9518cf3a6d8e374abd0f5397de8c27cfefabbc8fe388123adf42da686: Status 404 returned error can't find the container with id eba712a9518cf3a6d8e374abd0f5397de8c27cfefabbc8fe388123adf42da686 Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.483583 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"eba712a9518cf3a6d8e374abd0f5397de8c27cfefabbc8fe388123adf42da686"} Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.491098 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" event={"ID":"55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1","Type":"ContainerDied","Data":"31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756"} Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.491155 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv" Nov 26 13:36:39 crc kubenswrapper[4747]: I1126 13:36:39.491163 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ea8d46264dff256ab9cc0b081306ec37bb6f67af4b9a8e2b6f40a433c4f756" Nov 26 13:36:43 crc kubenswrapper[4747]: I1126 13:36:43.809478 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:43 crc kubenswrapper[4747]: I1126 13:36:43.811291 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-flzjr" Nov 26 13:36:44 crc kubenswrapper[4747]: I1126 13:36:44.524516 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"d70df640bb898e0b45c680fcea8bff19412e9d7318955e646ab0fae2e07d969f"} Nov 26 13:36:44 crc kubenswrapper[4747]: I1126 13:36:44.524748 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"c2b83db8a863e58254cc85eaaa3f11fdcc4403024086493ca971cb2d6b4099eb"} Nov 26 13:36:44 crc kubenswrapper[4747]: I1126 13:36:44.524759 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"6c3d582aa666a0bb04841413eabb8b101d0012fa8074c38fd2ddb81c0539d27d"} Nov 26 13:36:44 crc kubenswrapper[4747]: I1126 13:36:44.524767 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"7cdac68a6263bd24060e4b2fff8bf574611f0d0ea5c6f2a49365e2bd9133f814"} Nov 26 13:36:46 crc kubenswrapper[4747]: I1126 13:36:46.562474 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"7ad8eafbfb821c542969a112703ffcc7f007c77e31da0282ff1b0d1bce0c07ad"} Nov 26 13:36:46 crc kubenswrapper[4747]: I1126 13:36:46.563172 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"2519cd1b92dc3c1017dca55e520b17885ec864d365105fbaef67a66a2c0aea19"} Nov 26 13:36:46 crc kubenswrapper[4747]: I1126 13:36:46.563190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"d8367d01d598c75bb703f20757f1e2d977f07105331b04cd1828f4fa3bf74258"} Nov 26 13:36:46 crc kubenswrapper[4747]: I1126 13:36:46.563204 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"ab37a584acdefc0179eeac6112e4ec1c79ed8de7fccaa479e6d28f91e8f7f621"} Nov 26 13:36:47 crc kubenswrapper[4747]: I1126 13:36:47.577381 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"1dc6ff06a26116e1407261ce3d92eb057fd173a3d679ccad18097c3d8dfeb3f3"} Nov 26 13:36:48 crc kubenswrapper[4747]: 
I1126 13:36:48.594641 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"d905ac33aca31b81a8926907b1693c9157b77a8aad87ce73e02583c71c40b2b4"} Nov 26 13:36:48 crc kubenswrapper[4747]: I1126 13:36:48.596433 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"5bd568c12936f28452e483f83214d921bd77f8e69e8cb5f27eefd64a84f633d1"} Nov 26 13:36:49 crc kubenswrapper[4747]: I1126 13:36:49.612094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"7eb7d8e84228ea66e8f80a9d6b0f191a4bc23a625f2c86aca05ba813b612c8bc"} Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.624393 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"21eac1f98252e6fbdffffabc13a80c961e99d669be1fc2bbeeb0eaf10adb664d"} Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.624818 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"b8b7a7ad1f2d52f53ee3ac3469c8bfe879497add060827c94e5da2207eaf1b3a"} Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.624841 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"0c003abf-6288-4d54-8c91-07c1eebe0123","Type":"ContainerStarted","Data":"8503a803f2df5313f46dca1c3f4e94843adc6690509ac617596ca26359a2e46f"} Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.668935 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=37.829883913 podStartE2EDuration="45.668911486s" podCreationTimestamp="2025-11-26 13:36:05 +0000 UTC" firstStartedPulling="2025-11-26 13:36:39.447365246 +0000 UTC m=+1286.433676301" lastFinishedPulling="2025-11-26 13:36:47.286392859 +0000 UTC m=+1294.272703874" observedRunningTime="2025-11-26 13:36:50.666001149 +0000 UTC m=+1297.652312174" watchObservedRunningTime="2025-11-26 13:36:50.668911486 +0000 UTC m=+1297.655222501" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.988844 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn"] Nov 26 13:36:50 crc kubenswrapper[4747]: E1126 13:36:50.989276 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="pull" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.989298 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="pull" Nov 26 13:36:50 crc kubenswrapper[4747]: E1126 13:36:50.989311 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="util" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.989320 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="util" Nov 26 13:36:50 crc kubenswrapper[4747]: E1126 13:36:50.989333 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="extract" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.989344 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="extract" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.989522 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1" containerName="extract" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.990168 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.991837 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.992134 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ss2kf" Nov 26 13:36:50 crc kubenswrapper[4747]: I1126 13:36:50.998556 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn"] Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.090431 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxn6p\" (UniqueName: \"kubernetes.io/projected/68495c62-1417-4aaf-8a34-08ba62255819-kube-api-access-bxn6p\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.090715 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-webhook-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.090822 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-apiservice-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.192869 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxn6p\" (UniqueName: \"kubernetes.io/projected/68495c62-1417-4aaf-8a34-08ba62255819-kube-api-access-bxn6p\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.192929 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-webhook-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " 
pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.192955 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-apiservice-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.198842 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-webhook-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.199311 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68495c62-1417-4aaf-8a34-08ba62255819-apiservice-cert\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.213158 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxn6p\" (UniqueName: \"kubernetes.io/projected/68495c62-1417-4aaf-8a34-08ba62255819-kube-api-access-bxn6p\") pod \"glance-operator-controller-manager-5c788c94db-hqzsn\" (UID: \"68495c62-1417-4aaf-8a34-08ba62255819\") " pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.311129 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:51 crc kubenswrapper[4747]: I1126 13:36:51.738871 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn"] Nov 26 13:36:51 crc kubenswrapper[4747]: W1126 13:36:51.759118 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68495c62_1417_4aaf_8a34_08ba62255819.slice/crio-971ce2c5b9139e9c0d9bea539772f1cd37ede0f3c8e08ad38d30293670cca425 WatchSource:0}: Error finding container 971ce2c5b9139e9c0d9bea539772f1cd37ede0f3c8e08ad38d30293670cca425: Status 404 returned error can't find the container with id 971ce2c5b9139e9c0d9bea539772f1cd37ede0f3c8e08ad38d30293670cca425 Nov 26 13:36:52 crc kubenswrapper[4747]: I1126 13:36:52.646181 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" event={"ID":"68495c62-1417-4aaf-8a34-08ba62255819","Type":"ContainerStarted","Data":"971ce2c5b9139e9c0d9bea539772f1cd37ede0f3c8e08ad38d30293670cca425"} Nov 26 13:36:55 crc kubenswrapper[4747]: I1126 13:36:55.671474 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" event={"ID":"68495c62-1417-4aaf-8a34-08ba62255819","Type":"ContainerStarted","Data":"b70dbca8752cccd1be2809dddbc00080cdb5da64603a2563ba0ec276c8ba9905"} Nov 26 13:36:55 crc kubenswrapper[4747]: I1126 13:36:55.672193 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:36:55 crc kubenswrapper[4747]: I1126 13:36:55.700855 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" podStartSLOduration=2.45802013 podStartE2EDuration="5.700828749s" podCreationTimestamp="2025-11-26 13:36:50 +0000 UTC" firstStartedPulling="2025-11-26 13:36:51.761432736 +0000 UTC m=+1298.747743761" lastFinishedPulling="2025-11-26 13:36:55.004241365 +0000 UTC m=+1301.990552380" observedRunningTime="2025-11-26 13:36:55.69487113 +0000 UTC m=+1302.681182215" watchObservedRunningTime="2025-11-26 13:36:55.700828749 +0000 UTC m=+1302.687139804" Nov 26 13:37:01 crc kubenswrapper[4747]: I1126 13:37:01.314490 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5c788c94db-hqzsn" Nov 26 13:37:03 crc kubenswrapper[4747]: I1126 13:37:03.417828 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:37:03 crc kubenswrapper[4747]: I1126 13:37:03.418459 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.493234 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-s2qrx"] Nov 26 13:37:06 crc 
kubenswrapper[4747]: I1126 13:37:06.494587 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.506822 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-a264-account-create-update-45t4z"] Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.508581 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.511291 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.517858 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-s2qrx"] Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.524877 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a264-account-create-update-45t4z"] Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.634270 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts\") pod \"glance-a264-account-create-update-45t4z\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.634324 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5x2\" (UniqueName: \"kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2\") pod \"glance-a264-account-create-update-45t4z\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.634584 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv84d\" (UniqueName: \"kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.634660 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.736471 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts\") pod \"glance-a264-account-create-update-45t4z\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.736557 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5x2\" (UniqueName: \"kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2\") pod \"glance-a264-account-create-update-45t4z\" (UID: 
\"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.736674 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.736720 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv84d\" (UniqueName: \"kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.737669 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts\") pod \"glance-a264-account-create-update-45t4z\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.737731 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.761942 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv84d\" (UniqueName: \"kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d\") pod \"glance-db-create-s2qrx\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.762719 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5x2\" (UniqueName: \"kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2\") pod \"glance-a264-account-create-update-45t4z\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.810951 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:06 crc kubenswrapper[4747]: I1126 13:37:06.840606 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.227973 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-s2qrx"] Nov 26 13:37:07 crc kubenswrapper[4747]: W1126 13:37:07.230808 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b003451_863f_480c_b319_47a28c6845c5.slice/crio-548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f WatchSource:0}: Error finding container 548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f: Status 404 returned error can't find the container with id 548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.294279 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a264-account-create-update-45t4z"] Nov 26 13:37:07 crc kubenswrapper[4747]: W1126 13:37:07.298021 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9abdd72_5af1_4a9a_89a6_a36adf6f9474.slice/crio-5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188 WatchSource:0}: Error finding container 5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188: Status 404 returned error can't find the container with id 5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188 Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.770476 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b003451-863f-480c-b319-47a28c6845c5" containerID="6198fe079b5825c46974fc6030a48424241dd404cd43ceb4b71afa9e50815abf" exitCode=0 Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.770547 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-s2qrx" event={"ID":"2b003451-863f-480c-b319-47a28c6845c5","Type":"ContainerDied","Data":"6198fe079b5825c46974fc6030a48424241dd404cd43ceb4b71afa9e50815abf"} Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.770575 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-s2qrx" event={"ID":"2b003451-863f-480c-b319-47a28c6845c5","Type":"ContainerStarted","Data":"548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f"} Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.772613 4747 generic.go:334] "Generic (PLEG): container finished" podID="b9abdd72-5af1-4a9a-89a6-a36adf6f9474" containerID="8fd023de22b2ccf833b8216a3038a3a5d62b4fdd4d8aa0deb469e147374ca049" exitCode=0 Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.772662 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" event={"ID":"b9abdd72-5af1-4a9a-89a6-a36adf6f9474","Type":"ContainerDied","Data":"8fd023de22b2ccf833b8216a3038a3a5d62b4fdd4d8aa0deb469e147374ca049"} Nov 26 13:37:07 crc kubenswrapper[4747]: I1126 13:37:07.772691 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" event={"ID":"b9abdd72-5af1-4a9a-89a6-a36adf6f9474","Type":"ContainerStarted","Data":"5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188"} Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.113134 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.210635 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.274558 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts\") pod \"2b003451-863f-480c-b319-47a28c6845c5\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.274740 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv84d\" (UniqueName: \"kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d\") pod \"2b003451-863f-480c-b319-47a28c6845c5\" (UID: \"2b003451-863f-480c-b319-47a28c6845c5\") " Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.276005 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b003451-863f-480c-b319-47a28c6845c5" (UID: "2b003451-863f-480c-b319-47a28c6845c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.279909 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d" (OuterVolumeSpecName: "kube-api-access-kv84d") pod "2b003451-863f-480c-b319-47a28c6845c5" (UID: "2b003451-863f-480c-b319-47a28c6845c5"). InnerVolumeSpecName "kube-api-access-kv84d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.375846 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt5x2\" (UniqueName: \"kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2\") pod \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.375943 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts\") pod \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\" (UID: \"b9abdd72-5af1-4a9a-89a6-a36adf6f9474\") " Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.376285 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv84d\" (UniqueName: \"kubernetes.io/projected/2b003451-863f-480c-b319-47a28c6845c5-kube-api-access-kv84d\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.376303 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b003451-863f-480c-b319-47a28c6845c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.376713 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9abdd72-5af1-4a9a-89a6-a36adf6f9474" (UID: "b9abdd72-5af1-4a9a-89a6-a36adf6f9474"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.380007 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2" (OuterVolumeSpecName: "kube-api-access-qt5x2") pod "b9abdd72-5af1-4a9a-89a6-a36adf6f9474" (UID: "b9abdd72-5af1-4a9a-89a6-a36adf6f9474"). InnerVolumeSpecName "kube-api-access-qt5x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.477426 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.477462 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt5x2\" (UniqueName: \"kubernetes.io/projected/b9abdd72-5af1-4a9a-89a6-a36adf6f9474-kube-api-access-qt5x2\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.789425 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-s2qrx" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.789435 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-s2qrx" event={"ID":"2b003451-863f-480c-b319-47a28c6845c5","Type":"ContainerDied","Data":"548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f"} Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.789491 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548248f9055d83b86b261b8866d18ea2ef21d67d1fc14a9076ded2e82f9ffc3f" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.791320 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" event={"ID":"b9abdd72-5af1-4a9a-89a6-a36adf6f9474","Type":"ContainerDied","Data":"5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188"} Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.791358 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5239a5246934b0a46a3fb2a8d68ae765b3341a3c0f839277ce90297aa2658188" Nov 26 13:37:09 crc kubenswrapper[4747]: I1126 13:37:09.791389 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-a264-account-create-update-45t4z" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.606027 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-j4l9v"] Nov 26 13:37:11 crc kubenswrapper[4747]: E1126 13:37:11.606563 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9abdd72-5af1-4a9a-89a6-a36adf6f9474" containerName="mariadb-account-create-update" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.606575 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9abdd72-5af1-4a9a-89a6-a36adf6f9474" containerName="mariadb-account-create-update" Nov 26 13:37:11 crc kubenswrapper[4747]: E1126 13:37:11.606590 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b003451-863f-480c-b319-47a28c6845c5" containerName="mariadb-database-create" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.606596 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b003451-863f-480c-b319-47a28c6845c5" containerName="mariadb-database-create" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.606719 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b003451-863f-480c-b319-47a28c6845c5" containerName="mariadb-database-create" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.606737 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9abdd72-5af1-4a9a-89a6-a36adf6f9474" containerName="mariadb-account-create-update" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.607185 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.609023 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.610351 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-vsddm" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.610488 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.629490 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-j4l9v"] Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.711258 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.711300 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hltq\" (UniqueName: \"kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.711356 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data\") pod \"glance-db-sync-j4l9v\" (UID: 
\"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.711384 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.812456 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.812503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hltq\" (UniqueName: \"kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.812555 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.812580 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.818670 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.818771 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.819046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.830742 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hltq\" (UniqueName: \"kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq\") pod \"glance-db-sync-j4l9v\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 
13:37:11 crc kubenswrapper[4747]: I1126 13:37:11.921440 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:12 crc kubenswrapper[4747]: I1126 13:37:12.149118 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-j4l9v"] Nov 26 13:37:12 crc kubenswrapper[4747]: I1126 13:37:12.813640 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-j4l9v" event={"ID":"20b47c49-9475-4be2-877f-efcb27dc7693","Type":"ContainerStarted","Data":"8d6bf9f18bdca3b4960fb44dc48f839859347411defeb2fa0cc5d331295fb20d"} Nov 26 13:37:23 crc kubenswrapper[4747]: I1126 13:37:23.911110 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-j4l9v" event={"ID":"20b47c49-9475-4be2-877f-efcb27dc7693","Type":"ContainerStarted","Data":"b2c13feae87847099a250004ba8d776e0127bfb5a2a95783b077b348ce09a35f"} Nov 26 13:37:23 crc kubenswrapper[4747]: I1126 13:37:23.937376 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-j4l9v" podStartSLOduration=2.5133753690000002 podStartE2EDuration="12.937355457s" podCreationTimestamp="2025-11-26 13:37:11 +0000 UTC" firstStartedPulling="2025-11-26 13:37:12.180042791 +0000 UTC m=+1319.166353806" lastFinishedPulling="2025-11-26 13:37:22.604022879 +0000 UTC m=+1329.590333894" observedRunningTime="2025-11-26 13:37:23.926901804 +0000 UTC m=+1330.913212819" watchObservedRunningTime="2025-11-26 13:37:23.937355457 +0000 UTC m=+1330.923666492" Nov 26 13:37:30 crc kubenswrapper[4747]: I1126 13:37:30.970555 4747 generic.go:334] "Generic (PLEG): container finished" podID="20b47c49-9475-4be2-877f-efcb27dc7693" containerID="b2c13feae87847099a250004ba8d776e0127bfb5a2a95783b077b348ce09a35f" exitCode=0 Nov 26 13:37:30 crc kubenswrapper[4747]: I1126 13:37:30.970581 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-j4l9v" event={"ID":"20b47c49-9475-4be2-877f-efcb27dc7693","Type":"ContainerDied","Data":"b2c13feae87847099a250004ba8d776e0127bfb5a2a95783b077b348ce09a35f"} Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.251319 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.429236 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data\") pod \"20b47c49-9475-4be2-877f-efcb27dc7693\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.429304 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle\") pod \"20b47c49-9475-4be2-877f-efcb27dc7693\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.429354 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data\") pod \"20b47c49-9475-4be2-877f-efcb27dc7693\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.429441 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hltq\" (UniqueName: \"kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq\") pod \"20b47c49-9475-4be2-877f-efcb27dc7693\" (UID: \"20b47c49-9475-4be2-877f-efcb27dc7693\") " Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.434081 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq" (OuterVolumeSpecName: "kube-api-access-8hltq") pod "20b47c49-9475-4be2-877f-efcb27dc7693" (UID: "20b47c49-9475-4be2-877f-efcb27dc7693"). InnerVolumeSpecName "kube-api-access-8hltq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.440377 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "20b47c49-9475-4be2-877f-efcb27dc7693" (UID: "20b47c49-9475-4be2-877f-efcb27dc7693"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.451353 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20b47c49-9475-4be2-877f-efcb27dc7693" (UID: "20b47c49-9475-4be2-877f-efcb27dc7693"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.471679 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data" (OuterVolumeSpecName: "config-data") pod "20b47c49-9475-4be2-877f-efcb27dc7693" (UID: "20b47c49-9475-4be2-877f-efcb27dc7693"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.530695 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hltq\" (UniqueName: \"kubernetes.io/projected/20b47c49-9475-4be2-877f-efcb27dc7693-kube-api-access-8hltq\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.530735 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.530750 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.530761 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20b47c49-9475-4be2-877f-efcb27dc7693-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.991153 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-j4l9v" event={"ID":"20b47c49-9475-4be2-877f-efcb27dc7693","Type":"ContainerDied","Data":"8d6bf9f18bdca3b4960fb44dc48f839859347411defeb2fa0cc5d331295fb20d"} Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.991192 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6bf9f18bdca3b4960fb44dc48f839859347411defeb2fa0cc5d331295fb20d" Nov 26 13:37:32 crc kubenswrapper[4747]: I1126 13:37:32.991630 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-j4l9v" Nov 26 13:37:33 crc kubenswrapper[4747]: I1126 13:37:33.417499 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:37:33 crc kubenswrapper[4747]: I1126 13:37:33.417871 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.251765 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:34 crc kubenswrapper[4747]: E1126 13:37:34.252199 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b47c49-9475-4be2-877f-efcb27dc7693" containerName="glance-db-sync" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.252225 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b47c49-9475-4be2-877f-efcb27dc7693" containerName="glance-db-sync" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.252424 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b47c49-9475-4be2-877f-efcb27dc7693" containerName="glance-db-sync" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.253169 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.255447 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.255771 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.255984 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.255990 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.256831 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-vsddm" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.257401 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.272999 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359357 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359399 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359441 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frrr\" (UniqueName: \"kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359474 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359497 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359514 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359537 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359553 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.359575 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460635 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460682 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460720 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frrr\" (UniqueName: \"kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460755 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460781 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460797 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460823 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460838 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.460856 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.461811 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.462084 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.462088 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.466247 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.466601 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.466859 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: 
\"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.467975 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.474395 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.484926 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frrr\" (UniqueName: \"kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.485326 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:34 crc kubenswrapper[4747]: I1126 13:37:34.571339 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:35 crc kubenswrapper[4747]: I1126 13:37:35.036077 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:36 crc kubenswrapper[4747]: I1126 13:37:36.013179 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerStarted","Data":"a43ed42145c4501c551feb9a7fa0991c1749e2021a3936da91c739d5964ff0bd"} Nov 26 13:37:36 crc kubenswrapper[4747]: I1126 13:37:36.013505 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerStarted","Data":"ea15af4a2d533b7a44d5b2a4a54f00e16cd943b1e9862f16862ee66b967ed5ab"} Nov 26 13:37:36 crc kubenswrapper[4747]: I1126 13:37:36.013518 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerStarted","Data":"6682e126e08772d218003705658f77ce505fbef120851f645c1fed7b17f194ca"} Nov 26 13:37:36 crc kubenswrapper[4747]: I1126 13:37:36.031301 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.031284468 podStartE2EDuration="2.031284468s" podCreationTimestamp="2025-11-26 13:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:37:36.029731962 +0000 UTC m=+1343.016042977" watchObservedRunningTime="2025-11-26 13:37:36.031284468 +0000 UTC m=+1343.017595483" Nov 26 13:37:44 crc kubenswrapper[4747]: I1126 13:37:44.573167 4747 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:44 crc kubenswrapper[4747]: I1126 13:37:44.573692 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:44 crc kubenswrapper[4747]: I1126 13:37:44.613630 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:44 crc kubenswrapper[4747]: I1126 13:37:44.613702 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:45 crc kubenswrapper[4747]: I1126 13:37:45.082267 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:45 crc kubenswrapper[4747]: I1126 13:37:45.082307 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:47 crc kubenswrapper[4747]: I1126 13:37:47.036762 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:47 crc kubenswrapper[4747]: I1126 13:37:47.040189 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.104923 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-j4l9v"] Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.115996 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-j4l9v"] Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.121829 4747 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="glance-kuttl-tests/glance-default-single-0" secret="" err="secret \"glance-glance-dockercfg-vsddm\" not found" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.165281 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancea264-account-delete-c5mq9"] Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.166117 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.177837 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea264-account-delete-c5mq9"] Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.250397 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-scripts: secret "glance-scripts" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.250459 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:48.750441856 +0000 UTC m=+1355.736752871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-scripts" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.250533 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-default-single-config-data: secret "glance-default-single-config-data" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.250560 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:48.750551489 +0000 UTC m=+1355.736862504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-default-single-config-data" not found Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.295330 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.350841 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.351364 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjf4t\" (UniqueName: \"kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.453296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjf4t\" (UniqueName: \"kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.453388 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.454154 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.476983 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjf4t\" (UniqueName: 
\"kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t\") pod \"glancea264-account-delete-c5mq9\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.494429 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:48 crc kubenswrapper[4747]: I1126 13:37:48.735808 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea264-account-delete-c5mq9"] Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.759557 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-default-single-config-data: secret "glance-default-single-config-data" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.759633 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:49.759619016 +0000 UTC m=+1356.745930031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-default-single-config-data" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.760009 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-scripts: secret "glance-scripts" not found Nov 26 13:37:48 crc kubenswrapper[4747]: E1126 13:37:48.760045 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:49.760034276 +0000 UTC m=+1356.746345291 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-scripts" not found Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.130515 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-log" containerID="cri-o://ea15af4a2d533b7a44d5b2a4a54f00e16cd943b1e9862f16862ee66b967ed5ab" gracePeriod=30 Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.131449 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" event={"ID":"4d1e2400-80e4-433a-bd68-18a7af24f795","Type":"ContainerStarted","Data":"09921062aed88cd083ad60529b58e93ebbe48c3431d50796d9fcb21766054739"} Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.131473 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" event={"ID":"4d1e2400-80e4-433a-bd68-18a7af24f795","Type":"ContainerStarted","Data":"49a08a77046ef5f4ec17ec9395c929202cc48087b49b494a226d0a79fea943c9"} Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.131693 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-httpd" containerID="cri-o://a43ed42145c4501c551feb9a7fa0991c1749e2021a3936da91c739d5964ff0bd" gracePeriod=30 Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.137618 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.102:9292/healthcheck\": EOF" Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.137618 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.102:9292/healthcheck\": EOF" Nov 26 13:37:49 crc kubenswrapper[4747]: E1126 13:37:49.781334 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-scripts: secret "glance-scripts" not found Nov 26 13:37:49 crc kubenswrapper[4747]: E1126 13:37:49.781686 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:51.781664856 +0000 UTC m=+1358.767975871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-scripts" not found Nov 26 13:37:49 crc kubenswrapper[4747]: E1126 13:37:49.781338 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-default-single-config-data: secret "glance-default-single-config-data" not found Nov 26 13:37:49 crc kubenswrapper[4747]: E1126 13:37:49.781775 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. 
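
Interleaved with the mount retries, the kubelet is tearing the pod down: both containers are killed with gracePeriod=30 above, and the in-flight readiness probes fail with EOF because glance-httpd has stopped accepting connections. Just below, glance-log finishes with exitCode=143, which under the common 128+signal container-runtime convention is SIGTERM (15), i.e. a clean reaction to the graceful kill rather than a crash; glance-httpd drains and exits 0 at 13:37:56. A small sketch of decoding such an exit code (the convention is general, not specific to this log):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        const exitCode = 143 // as reported for glance-log below
        if exitCode > 128 {
            sig := syscall.Signal(exitCode - 128)
            // Prints: terminated by signal 15 (terminated), i.e. SIGTERM
            fmt.Printf("terminated by signal %d (%v)\n", int(sig), sig)
        }
    }
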
Nov 26 13:37:49 crc kubenswrapper[4747]: E1126 13:37:49.781775 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:51.781762529 +0000 UTC m=+1358.768073544 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-default-single-config-data" not found Nov 26 13:37:49 crc kubenswrapper[4747]: I1126 13:37:49.807006 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b47c49-9475-4be2-877f-efcb27dc7693" path="/var/lib/kubelet/pods/20b47c49-9475-4be2-877f-efcb27dc7693/volumes" Nov 26 13:37:50 crc kubenswrapper[4747]: I1126 13:37:50.138764 4747 generic.go:334] "Generic (PLEG): container finished" podID="4d1e2400-80e4-433a-bd68-18a7af24f795" containerID="09921062aed88cd083ad60529b58e93ebbe48c3431d50796d9fcb21766054739" exitCode=0 Nov 26 13:37:50 crc kubenswrapper[4747]: I1126 13:37:50.138814 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" event={"ID":"4d1e2400-80e4-433a-bd68-18a7af24f795","Type":"ContainerDied","Data":"09921062aed88cd083ad60529b58e93ebbe48c3431d50796d9fcb21766054739"} Nov 26 13:37:50 crc kubenswrapper[4747]: I1126 13:37:50.140673 4747 generic.go:334] "Generic (PLEG): container finished" podID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerID="ea15af4a2d533b7a44d5b2a4a54f00e16cd943b1e9862f16862ee66b967ed5ab" exitCode=143 Nov 26 13:37:50 crc kubenswrapper[4747]: I1126 13:37:50.140704 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerDied","Data":"ea15af4a2d533b7a44d5b2a4a54f00e16cd943b1e9862f16862ee66b967ed5ab"} Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.445472 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.606723 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjf4t\" (UniqueName: \"kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t\") pod \"4d1e2400-80e4-433a-bd68-18a7af24f795\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.606953 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts\") pod \"4d1e2400-80e4-433a-bd68-18a7af24f795\" (UID: \"4d1e2400-80e4-433a-bd68-18a7af24f795\") " Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.608567 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d1e2400-80e4-433a-bd68-18a7af24f795" (UID: "4d1e2400-80e4-433a-bd68-18a7af24f795"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.630505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t" (OuterVolumeSpecName: "kube-api-access-mjf4t") pod "4d1e2400-80e4-433a-bd68-18a7af24f795" (UID: "4d1e2400-80e4-433a-bd68-18a7af24f795"). 
InnerVolumeSpecName "kube-api-access-mjf4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.709077 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e2400-80e4-433a-bd68-18a7af24f795-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:51 crc kubenswrapper[4747]: I1126 13:37:51.709114 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjf4t\" (UniqueName: \"kubernetes.io/projected/4d1e2400-80e4-433a-bd68-18a7af24f795-kube-api-access-mjf4t\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:51 crc kubenswrapper[4747]: E1126 13:37:51.811135 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-default-single-config-data: secret "glance-default-single-config-data" not found Nov 26 13:37:51 crc kubenswrapper[4747]: E1126 13:37:51.811449 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:55.811432102 +0000 UTC m=+1362.797743127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-default-single-config-data" not found Nov 26 13:37:51 crc kubenswrapper[4747]: E1126 13:37:51.813505 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-scripts: secret "glance-scripts" not found Nov 26 13:37:51 crc kubenswrapper[4747]: E1126 13:37:51.813585 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:37:55.813567466 +0000 UTC m=+1362.799878491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-scripts" not found Nov 26 13:37:52 crc kubenswrapper[4747]: I1126 13:37:52.156018 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" event={"ID":"4d1e2400-80e4-433a-bd68-18a7af24f795","Type":"ContainerDied","Data":"49a08a77046ef5f4ec17ec9395c929202cc48087b49b494a226d0a79fea943c9"} Nov 26 13:37:52 crc kubenswrapper[4747]: I1126 13:37:52.156073 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a08a77046ef5f4ec17ec9395c929202cc48087b49b494a226d0a79fea943c9" Nov 26 13:37:52 crc kubenswrapper[4747]: I1126 13:37:52.156198 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancea264-account-delete-c5mq9" Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.187632 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-s2qrx"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.193630 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-s2qrx"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.200028 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancea264-account-delete-c5mq9"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.205180 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-a264-account-create-update-45t4z"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.210228 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-a264-account-create-update-45t4z"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.214557 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancea264-account-delete-c5mq9"] Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.808538 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b003451-863f-480c-b319-47a28c6845c5" path="/var/lib/kubelet/pods/2b003451-863f-480c-b319-47a28c6845c5/volumes" Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.809095 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1e2400-80e4-433a-bd68-18a7af24f795" path="/var/lib/kubelet/pods/4d1e2400-80e4-433a-bd68-18a7af24f795/volumes" Nov 26 13:37:53 crc kubenswrapper[4747]: I1126 13:37:53.809615 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9abdd72-5af1-4a9a-89a6-a36adf6f9474" path="/var/lib/kubelet/pods/b9abdd72-5af1-4a9a-89a6-a36adf6f9474/volumes" Nov 26 13:37:55 crc kubenswrapper[4747]: E1126 13:37:55.877640 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-default-single-config-data: secret "glance-default-single-config-data" not found Nov 26 13:37:55 crc kubenswrapper[4747]: E1126 13:37:55.877670 4747 secret.go:188] Couldn't get secret glance-kuttl-tests/glance-scripts: secret "glance-scripts" not found Nov 26 13:37:55 crc kubenswrapper[4747]: E1126 13:37:55.877980 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:38:03.877960448 +0000 UTC m=+1370.864271463 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-default-single-config-data" not found Nov 26 13:37:55 crc kubenswrapper[4747]: E1126 13:37:55.878068 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts podName:993ef2a0-e47f-4007-827d-4a0d38fb3d6a nodeName:}" failed. No retries permitted until 2025-11-26 13:38:03.87803527 +0000 UTC m=+1370.864346285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts") pod "glance-default-single-0" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a") : secret "glance-scripts" not found Nov 26 13:37:56 crc kubenswrapper[4747]: I1126 13:37:56.195181 4747 generic.go:334] "Generic (PLEG): container finished" podID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerID="a43ed42145c4501c551feb9a7fa0991c1749e2021a3936da91c739d5964ff0bd" exitCode=0 Nov 26 13:37:56 crc kubenswrapper[4747]: I1126 13:37:56.195234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerDied","Data":"a43ed42145c4501c551feb9a7fa0991c1749e2021a3936da91c739d5964ff0bd"} Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.167140 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197029 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197144 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197191 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197234 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197265 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frrr\" (UniqueName: \"kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197287 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197338 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197370 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.197412 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs\") pod \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\" (UID: \"993ef2a0-e47f-4007-827d-4a0d38fb3d6a\") " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.198681 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.198809 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs" (OuterVolumeSpecName: "logs") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.202521 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts" (OuterVolumeSpecName: "scripts") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.202907 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.203124 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr" (OuterVolumeSpecName: "kube-api-access-9frrr") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "kube-api-access-9frrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.206884 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"993ef2a0-e47f-4007-827d-4a0d38fb3d6a","Type":"ContainerDied","Data":"6682e126e08772d218003705658f77ce505fbef120851f645c1fed7b17f194ca"} Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.207109 4747 scope.go:117] "RemoveContainer" containerID="a43ed42145c4501c551feb9a7fa0991c1749e2021a3936da91c739d5964ff0bd" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.207340 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.225594 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.233078 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data" (OuterVolumeSpecName: "config-data") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.237045 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.243702 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "993ef2a0-e47f-4007-827d-4a0d38fb3d6a" (UID: "993ef2a0-e47f-4007-827d-4a0d38fb3d6a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.292271 4747 scope.go:117] "RemoveContainer" containerID="ea15af4a2d533b7a44d5b2a4a54f00e16cd943b1e9862f16862ee66b967ed5ab" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299238 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299301 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299318 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299331 4747 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299342 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frrr\" (UniqueName: \"kubernetes.io/projected/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-kube-api-access-9frrr\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299353 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299392 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299406 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.299417 4747 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/993ef2a0-e47f-4007-827d-4a0d38fb3d6a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.314565 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.401268 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.542330 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.548532 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:37:57 crc kubenswrapper[4747]: I1126 13:37:57.808297 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" path="/var/lib/kubelet/pods/993ef2a0-e47f-4007-827d-4a0d38fb3d6a/volumes" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.957964 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-lvrzx"] Nov 26 13:37:59 crc kubenswrapper[4747]: E1126 13:37:59.958544 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e2400-80e4-433a-bd68-18a7af24f795" containerName="mariadb-account-delete" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958557 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e2400-80e4-433a-bd68-18a7af24f795" containerName="mariadb-account-delete" Nov 26 13:37:59 crc kubenswrapper[4747]: E1126 13:37:59.958584 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-log" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958591 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-log" Nov 26 13:37:59 crc kubenswrapper[4747]: E1126 13:37:59.958604 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-httpd" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958610 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-httpd" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958755 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-httpd" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958768 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="993ef2a0-e47f-4007-827d-4a0d38fb3d6a" containerName="glance-log" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.958777 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e2400-80e4-433a-bd68-18a7af24f795" containerName="mariadb-account-delete" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.959257 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.975152 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc"] Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.976332 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.979446 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 26 13:37:59 crc kubenswrapper[4747]: I1126 13:37:59.981576 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-lvrzx"] Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.002000 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc"] Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.043689 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmt9\" (UniqueName: \"kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.043813 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.043848 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phmnq\" (UniqueName: \"kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.043892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.145138 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmt9\" (UniqueName: \"kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.145249 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.145279 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phmnq\" (UniqueName: \"kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " 
pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.145370 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.146133 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.146473 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.167605 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phmnq\" (UniqueName: \"kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq\") pod \"glance-db-create-lvrzx\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.167618 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmt9\" (UniqueName: \"kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9\") pod \"glance-6b4d-account-create-update-j5bxc\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.276308 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.307225 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.694957 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-lvrzx"] Nov 26 13:38:00 crc kubenswrapper[4747]: I1126 13:38:00.779821 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc"] Nov 26 13:38:00 crc kubenswrapper[4747]: W1126 13:38:00.787658 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0894d3e0_e4ac_46c1_b040_71bb743e4403.slice/crio-65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b WatchSource:0}: Error finding container 65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b: Status 404 returned error can't find the container with id 65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.248389 4747 generic.go:334] "Generic (PLEG): container finished" podID="0f49789e-2753-45b8-98d0-2bfb826f476f" containerID="1f5229f57606a42c709ed24c531331d56023e0f638ec6de43dcf6e13cfe7efb3" exitCode=0 Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.248495 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lvrzx" event={"ID":"0f49789e-2753-45b8-98d0-2bfb826f476f","Type":"ContainerDied","Data":"1f5229f57606a42c709ed24c531331d56023e0f638ec6de43dcf6e13cfe7efb3"} Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.248892 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lvrzx" event={"ID":"0f49789e-2753-45b8-98d0-2bfb826f476f","Type":"ContainerStarted","Data":"37637055f3cb1f1cb936085225b7989cb715dedf688c0620ea13338d9dac8aa9"} Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.250380 4747 generic.go:334] "Generic (PLEG): container finished" podID="0894d3e0-e4ac-46c1-b040-71bb743e4403" containerID="13c86963611eeb04c6360f4cbd785ace25f6213706702da76d6f1eebcaa94a1e" exitCode=0 Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.250409 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" event={"ID":"0894d3e0-e4ac-46c1-b040-71bb743e4403","Type":"ContainerDied","Data":"13c86963611eeb04c6360f4cbd785ace25f6213706702da76d6f1eebcaa94a1e"} Nov 26 13:38:01 crc kubenswrapper[4747]: I1126 13:38:01.250485 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" event={"ID":"0894d3e0-e4ac-46c1-b040-71bb743e4403","Type":"ContainerStarted","Data":"65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b"} Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.603105 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.607110 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791169 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmt9\" (UniqueName: \"kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9\") pod \"0894d3e0-e4ac-46c1-b040-71bb743e4403\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791250 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phmnq\" (UniqueName: \"kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq\") pod \"0f49789e-2753-45b8-98d0-2bfb826f476f\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791329 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts\") pod \"0894d3e0-e4ac-46c1-b040-71bb743e4403\" (UID: \"0894d3e0-e4ac-46c1-b040-71bb743e4403\") " Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791391 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts\") pod \"0f49789e-2753-45b8-98d0-2bfb826f476f\" (UID: \"0f49789e-2753-45b8-98d0-2bfb826f476f\") " Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791861 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0894d3e0-e4ac-46c1-b040-71bb743e4403" (UID: "0894d3e0-e4ac-46c1-b040-71bb743e4403"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.791896 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f49789e-2753-45b8-98d0-2bfb826f476f" (UID: "0f49789e-2753-45b8-98d0-2bfb826f476f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.796670 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9" (OuterVolumeSpecName: "kube-api-access-dcmt9") pod "0894d3e0-e4ac-46c1-b040-71bb743e4403" (UID: "0894d3e0-e4ac-46c1-b040-71bb743e4403"). InnerVolumeSpecName "kube-api-access-dcmt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.798073 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq" (OuterVolumeSpecName: "kube-api-access-phmnq") pod "0f49789e-2753-45b8-98d0-2bfb826f476f" (UID: "0f49789e-2753-45b8-98d0-2bfb826f476f"). InnerVolumeSpecName "kube-api-access-phmnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.893695 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmt9\" (UniqueName: \"kubernetes.io/projected/0894d3e0-e4ac-46c1-b040-71bb743e4403-kube-api-access-dcmt9\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.893733 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phmnq\" (UniqueName: \"kubernetes.io/projected/0f49789e-2753-45b8-98d0-2bfb826f476f-kube-api-access-phmnq\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.893744 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0894d3e0-e4ac-46c1-b040-71bb743e4403-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:02 crc kubenswrapper[4747]: I1126 13:38:02.893753 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f49789e-2753-45b8-98d0-2bfb826f476f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.275124 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" event={"ID":"0894d3e0-e4ac-46c1-b040-71bb743e4403","Type":"ContainerDied","Data":"65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b"} Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.276033 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b6c65ac9c2cd4284d0e0e44120301c6011ea098339b76aede4fd7d41963d7b" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.275467 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.277497 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-lvrzx" event={"ID":"0f49789e-2753-45b8-98d0-2bfb826f476f","Type":"ContainerDied","Data":"37637055f3cb1f1cb936085225b7989cb715dedf688c0620ea13338d9dac8aa9"} Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.277528 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-lvrzx" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.277547 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37637055f3cb1f1cb936085225b7989cb715dedf688c0620ea13338d9dac8aa9" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.418358 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.418854 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.419129 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.419893 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:38:03 crc kubenswrapper[4747]: I1126 13:38:03.420077 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8" gracePeriod=600 Nov 26 13:38:04 crc kubenswrapper[4747]: I1126 13:38:04.288583 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8" exitCode=0 Nov 26 13:38:04 crc kubenswrapper[4747]: I1126 13:38:04.288696 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8"} Nov 26 13:38:04 crc kubenswrapper[4747]: I1126 13:38:04.289284 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"} Nov 26 13:38:04 crc kubenswrapper[4747]: I1126 13:38:04.289323 4747 scope.go:117] "RemoveContainer" containerID="bb44baaeaa989e478f325c470fbc512f8c7deb6097dda428644ec386db6331c1" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.102740 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-5w595"] Nov 26 13:38:05 crc kubenswrapper[4747]: E1126 13:38:05.103326 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0894d3e0-e4ac-46c1-b040-71bb743e4403" 
containerName="mariadb-account-create-update" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.103344 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0894d3e0-e4ac-46c1-b040-71bb743e4403" containerName="mariadb-account-create-update" Nov 26 13:38:05 crc kubenswrapper[4747]: E1126 13:38:05.103367 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f49789e-2753-45b8-98d0-2bfb826f476f" containerName="mariadb-database-create" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.103373 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f49789e-2753-45b8-98d0-2bfb826f476f" containerName="mariadb-database-create" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.103500 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f49789e-2753-45b8-98d0-2bfb826f476f" containerName="mariadb-database-create" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.103523 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0894d3e0-e4ac-46c1-b040-71bb743e4403" containerName="mariadb-account-create-update" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.103951 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.106940 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.107248 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-5j578" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.110472 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5w595"] Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.126489 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.126559 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.126595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8bz\" (UniqueName: \"kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.228226 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.228290 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.228312 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8bz\" (UniqueName: \"kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.235271 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.235389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.255408 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8bz\" (UniqueName: \"kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz\") pod \"glance-db-sync-5w595\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") " pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.427286 4747 util.go:30] "No sandbox for pod can be found. 
Nov 26 13:38:05 crc kubenswrapper[4747]: I1126 13:38:05.860424 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5w595"]
Nov 26 13:38:06 crc kubenswrapper[4747]: I1126 13:38:06.307762 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5w595" event={"ID":"4208ff9a-2bce-4b42-9462-f612783e629a","Type":"ContainerStarted","Data":"d23f495eb88df2885d2fff504844b424c671058d290680f08f3765120371328a"}
Nov 26 13:38:07 crc kubenswrapper[4747]: I1126 13:38:07.328793 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5w595" event={"ID":"4208ff9a-2bce-4b42-9462-f612783e629a","Type":"ContainerStarted","Data":"cc44cfb85cf287b24227e8d8c63ae7c3e15268ef183bbe3e96bbbcf8e1e8230e"}
Nov 26 13:38:07 crc kubenswrapper[4747]: I1126 13:38:07.347690 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-5w595" podStartSLOduration=2.347669538 podStartE2EDuration="2.347669538s" podCreationTimestamp="2025-11-26 13:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:07.344480998 +0000 UTC m=+1374.330792013" watchObservedRunningTime="2025-11-26 13:38:07.347669538 +0000 UTC m=+1374.333980563"
Nov 26 13:38:09 crc kubenswrapper[4747]: I1126 13:38:09.343210 4747 generic.go:334] "Generic (PLEG): container finished" podID="4208ff9a-2bce-4b42-9462-f612783e629a" containerID="cc44cfb85cf287b24227e8d8c63ae7c3e15268ef183bbe3e96bbbcf8e1e8230e" exitCode=0
Nov 26 13:38:09 crc kubenswrapper[4747]: I1126 13:38:09.343319 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5w595" event={"ID":"4208ff9a-2bce-4b42-9462-f612783e629a","Type":"ContainerDied","Data":"cc44cfb85cf287b24227e8d8c63ae7c3e15268ef183bbe3e96bbbcf8e1e8230e"}
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.620578 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5w595"
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.803089 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8bz\" (UniqueName: \"kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz\") pod \"4208ff9a-2bce-4b42-9462-f612783e629a\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") "
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.803530 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data\") pod \"4208ff9a-2bce-4b42-9462-f612783e629a\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") "
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.803621 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data\") pod \"4208ff9a-2bce-4b42-9462-f612783e629a\" (UID: \"4208ff9a-2bce-4b42-9462-f612783e629a\") "
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.808969 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4208ff9a-2bce-4b42-9462-f612783e629a" (UID: "4208ff9a-2bce-4b42-9462-f612783e629a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.809613 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz" (OuterVolumeSpecName: "kube-api-access-vd8bz") pod "4208ff9a-2bce-4b42-9462-f612783e629a" (UID: "4208ff9a-2bce-4b42-9462-f612783e629a"). InnerVolumeSpecName "kube-api-access-vd8bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.841446 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data" (OuterVolumeSpecName: "config-data") pod "4208ff9a-2bce-4b42-9462-f612783e629a" (UID: "4208ff9a-2bce-4b42-9462-f612783e629a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
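[editor note] The "Observed pod startup duration" entry above reports podStartSLOduration as the time from the pod's creationTimestamp to when it was observed Running, minus any image-pull time (no pulls occurred here: both pulling timestamps are zero). A minimal sketch reproducing that arithmetic with the values from the log:

    // sloduration_sketch.go — reproduces the startup-duration arithmetic
    // from the log entry above (values copied from the log; the exact
    // pull-time subtraction in the kubelet is elided since no pull happened).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-11-26T13:38:05Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-11-26T13:38:07.347669538Z")
        // Prints 2.347669538s, matching podStartSLOduration in the entry.
        fmt.Println(observed.Sub(created))
    }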
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.904989 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8bz\" (UniqueName: \"kubernetes.io/projected/4208ff9a-2bce-4b42-9462-f612783e629a-kube-api-access-vd8bz\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.905027 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:10 crc kubenswrapper[4747]: I1126 13:38:10.905040 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4208ff9a-2bce-4b42-9462-f612783e629a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:11 crc kubenswrapper[4747]: I1126 13:38:11.359926 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5w595" event={"ID":"4208ff9a-2bce-4b42-9462-f612783e629a","Type":"ContainerDied","Data":"d23f495eb88df2885d2fff504844b424c671058d290680f08f3765120371328a"} Nov 26 13:38:11 crc kubenswrapper[4747]: I1126 13:38:11.359968 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23f495eb88df2885d2fff504844b424c671058d290680f08f3765120371328a" Nov 26 13:38:11 crc kubenswrapper[4747]: I1126 13:38:11.359995 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5w595" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.637601 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:38:12 crc kubenswrapper[4747]: E1126 13:38:12.637865 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4208ff9a-2bce-4b42-9462-f612783e629a" containerName="glance-db-sync" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.637875 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4208ff9a-2bce-4b42-9462-f612783e629a" containerName="glance-db-sync" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.638013 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4208ff9a-2bce-4b42-9462-f612783e629a" containerName="glance-db-sync" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.638937 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.641035 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-5j578" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.641489 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.641797 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.658194 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.741746 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.743379 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.746231 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.760127 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828223 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828307 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828333 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828358 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828389 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4jj\" (UniqueName: \"kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828443 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828488 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme\") pod 
\"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828507 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828537 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828566 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828603 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828640 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.828670 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930094 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930159 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930179 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930207 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930230 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930248 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930285 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930299 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930467 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930527 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ck6wg\" (UniqueName: \"kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.930781 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931129 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931331 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931407 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931459 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931516 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931590 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931644 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931712 4747 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.931949 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932082 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4jj\" (UniqueName: \"kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932120 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932125 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932177 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932263 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932305 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932343 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932375 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932401 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932432 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932454 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932466 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932631 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932648 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932682 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932703 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.932712 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod 
\"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.937613 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.940190 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.960725 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4jj\" (UniqueName: \"kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.964265 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:12 crc kubenswrapper[4747]: I1126 13:38:12.967477 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034078 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034166 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034218 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck6wg\" (UniqueName: \"kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034254 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: 
\"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034286 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034288 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034331 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034366 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034475 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034510 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034597 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034632 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034652 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034681 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034718 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034772 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034510 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034716 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.034841 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.035236 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.035614 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 
13:38:13.035608 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.035890 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.038538 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.039280 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.041393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.056886 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.060657 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.060981 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6wg\" (UniqueName: \"kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg\") pod \"glance-default-internal-api-0\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.266211 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.356357 4747 util.go:30] "No sandbox for pod can be found. 
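[editor note] The "MountVolume.MountDevice succeeded ... device mount path /mnt/openstack/pvNN" entries above involve local PersistentVolumes. A sketch of the shape such a local PV takes with the Kubernetes Go API follows; the name, path, and node hostname mirror the log, while capacity and access mode are assumptions.

    // localpv_sketch.go — illustrative local PersistentVolume matching the
    // MountDevice entries above; capacity/access mode are assumed.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pv := corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "local-storage18-crc"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
                },
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv18"},
                },
                // Local PVs must carry node affinity pinning them to the node
                // that hosts the path; here that node is "crc".
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{"crc"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Println(pv.Name, pv.Spec.Local.Path)
    }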
Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.466579 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.730706 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:38:13 crc kubenswrapper[4747]: W1126 13:38:13.841454 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a6f09d_aff0_41fc_9838_b546e7aa4392.slice/crio-56e89ead1a894fb8eec1965484ceb56897151840711cd43745177ecdca92aa0e WatchSource:0}: Error finding container 56e89ead1a894fb8eec1965484ceb56897151840711cd43745177ecdca92aa0e: Status 404 returned error can't find the container with id 56e89ead1a894fb8eec1965484ceb56897151840711cd43745177ecdca92aa0e
Nov 26 13:38:13 crc kubenswrapper[4747]: I1126 13:38:13.843980 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397263 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerStarted","Data":"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397938 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerStarted","Data":"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397957 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerStarted","Data":"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397968 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerStarted","Data":"56e89ead1a894fb8eec1965484ceb56897151840711cd43745177ecdca92aa0e"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397389 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-api" containerID="cri-o://41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44" gracePeriod=30
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397330 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-log" containerID="cri-o://aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793" gracePeriod=30
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.397389 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-httpd" containerID="cri-o://7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786" gracePeriod=30
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.401489 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerStarted","Data":"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.401610 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerStarted","Data":"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.401709 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerStarted","Data":"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.401793 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerStarted","Data":"1f6445db5bf6865b3298092725827a8ee5458a530154ce5350ef225094b2449e"}
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.424950 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.424934127 podStartE2EDuration="3.424934127s" podCreationTimestamp="2025-11-26 13:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:14.420324413 +0000 UTC m=+1381.406635428" watchObservedRunningTime="2025-11-26 13:38:14.424934127 +0000 UTC m=+1381.411245142"
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.458023 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.457992541 podStartE2EDuration="2.457992541s" podCreationTimestamp="2025-11-26 13:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:14.454493504 +0000 UTC m=+1381.440804539" watchObservedRunningTime="2025-11-26 13:38:14.457992541 +0000 UTC m=+1381.444303556"
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.765534 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
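[editor note] The SyncLoop ADD/UPDATE/DELETE and PLEG ContainerStarted/ContainerDied entries above are the kubelet's view of API-server changes and container state transitions. The same lifecycle can be observed from outside the node with a client-go watch; a minimal sketch follows, where the kubeconfig path is an assumption and the namespace is taken from the log.

    // watch_sketch.go — observing the glance-kuttl-tests pods with client-go,
    // analogous to (not identical to) the kubelet's SyncLoop/PLEG entries.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        w, err := cs.CoreV1().Pods("glance-kuttl-tests").Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            // ADDED/MODIFIED/DELETED events correspond to what the kubelet
            // logs as "SyncLoop ADD/UPDATE/DELETE"; container transitions
            // surface here as pod status changes.
            fmt.Println(ev.Type)
        }
    }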
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872605 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872658 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872727 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872753 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872761 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872783 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872837 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run" (OuterVolumeSpecName: "run") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872896 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872945 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.872993 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873033 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873070 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873121 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873147 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873191 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873261 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck6wg\" (UniqueName: \"kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg\") pod \"06a6f09d-aff0-41fc-9838-b546e7aa4392\" (UID: \"06a6f09d-aff0-41fc-9838-b546e7aa4392\") " Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873856 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873882 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873117 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873882 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873931 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys" (OuterVolumeSpecName: "sys") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873950 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.873967 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev" (OuterVolumeSpecName: "dev") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.878679 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg" (OuterVolumeSpecName: "kube-api-access-ck6wg") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "kube-api-access-ck6wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.878710 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.878766 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "local-storage17-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.879498 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs" (OuterVolumeSpecName: "logs") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.880036 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts" (OuterVolumeSpecName: "scripts") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.883339 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.958531 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data" (OuterVolumeSpecName: "config-data") pod "06a6f09d-aff0-41fc-9838-b546e7aa4392" (UID: "06a6f09d-aff0-41fc-9838-b546e7aa4392"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975234 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck6wg\" (UniqueName: \"kubernetes.io/projected/06a6f09d-aff0-41fc-9838-b546e7aa4392-kube-api-access-ck6wg\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975268 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975280 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975292 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975303 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975313 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975337 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Nov 26 
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975354 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a6f09d-aff0-41fc-9838-b546e7aa4392-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975367 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-sys\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975378 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a6f09d-aff0-41fc-9838-b546e7aa4392-logs\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975389 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06a6f09d-aff0-41fc-9838-b546e7aa4392-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.975405 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" "
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.989805 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc"
Nov 26 13:38:14 crc kubenswrapper[4747]: I1126 13:38:14.993072 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.076966 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.077022 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.413336 4747 generic.go:334] "Generic (PLEG): container finished" podID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44" exitCode=143
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.413662 4747 generic.go:334] "Generic (PLEG): container finished" podID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786" exitCode=143
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.413673 4747 generic.go:334] "Generic (PLEG): container finished" podID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793" exitCode=143
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414219 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerDied","Data":"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"}
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414338 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerDied","Data":"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"}
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414368 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerDied","Data":"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"}
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414383 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"06a6f09d-aff0-41fc-9838-b546e7aa4392","Type":"ContainerDied","Data":"56e89ead1a894fb8eec1965484ceb56897151840711cd43745177ecdca92aa0e"}
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414403 4747 scope.go:117] "RemoveContainer" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.414576 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.441107 4747 scope.go:117] "RemoveContainer" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.462854 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.479809 4747 scope.go:117] "RemoveContainer" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.505295 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.513394 4747 scope.go:117] "RemoveContainer" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.513870 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": container with ID starting with 41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44 not found: ID does not exist" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.513901 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"} err="failed to get container status \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": rpc error: code = NotFound desc = could not find container \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": container with ID starting with 41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.513952 4747 scope.go:117] "RemoveContainer" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.514340 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": container with ID starting with 7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786 not found: ID does not exist" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.514369 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"} err="failed to get container status \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": rpc error: code = NotFound desc = could not find container \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": container with ID starting with 7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.514383 4747 scope.go:117] "RemoveContainer" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.514694 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": container with ID starting with aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793 not found: ID does not exist" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.514739 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"} err="failed to get container status \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": rpc error: code = NotFound desc = could not find container \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": container with ID starting with aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.514771 4747 scope.go:117] "RemoveContainer" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.515039 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"} err="failed to get container status \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": rpc error: code = NotFound desc = could not find container \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": container with ID starting with 41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.515076 4747 scope.go:117] "RemoveContainer" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516253 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"} err="failed to get container status \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": rpc error: code = NotFound desc = could not find container \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": container with ID starting with 7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516278 4747 scope.go:117] "RemoveContainer" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516461 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516538 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"} err="failed to get container status \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": rpc error: code = NotFound desc = could not find container \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": container with ID starting with aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516555 4747 scope.go:117] "RemoveContainer" containerID="41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516812 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44"} err="failed to get container status \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": rpc error: code = NotFound desc = could not find container \"41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44\": container with ID starting with 41bf31dba85a35cd7d4513e25fea4be91102af5c6c2a94d770336969ddcb9d44 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.516836 4747 scope.go:117] "RemoveContainer" containerID="7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517046 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786"} err="failed to get container status \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": rpc error: code = NotFound desc = could not find container \"7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786\": container with ID starting with 7f546d542a9399e4bf3801272ab87674251d6fc4d01ef6ac440c1305cce75786 not found: ID does not exist"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517147 4747 scope.go:117] "RemoveContainer" containerID="aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.517227 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-api"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517241 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-api"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.517255 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-log"
Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517262 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-log"
Nov 26 13:38:15 crc kubenswrapper[4747]: E1126 13:38:15.517278 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-httpd"
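The repeated RemoveContainer / "ID does not exist" pairs above show a cleanup pattern worth noting when reading this log: deletion is attempted once per container reference, and a NotFound answer from the runtime is effectively "already gone", not a real failure. A small Go sketch of that idempotent pattern follows; the types and the in-memory "runtime" are invented for illustration, and this is not the CRI client API.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

// removeContainer pretends to call the runtime; ids in gone were already deleted.
func removeContainer(gone map[string]bool, id string) error {
	if gone[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	gone[id] = true
	return nil
}

func main() {
	gone := map[string]bool{}
	ids := []string{"41bf31db", "41bf31db", "7f546d54"} // duplicate on purpose

	for _, id := range ids {
		err := removeContainer(gone, id)
		switch {
		case err == nil:
			fmt.Println("removed", id)
		case errors.Is(err, errNotFound):
			// Idempotent: the container is gone, which is the desired state.
			fmt.Println("already removed", id)
		default:
			fmt.Println("retry later:", err)
		}
	}
}
```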
"RemoveStaleState: removing container" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-httpd" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517285 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-httpd" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517401 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793"} err="failed to get container status \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": rpc error: code = NotFound desc = could not find container \"aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793\": container with ID starting with aa0df04949855deaea0d94150c173c9e89a3d7046dd9e6b19a9c86e85dc9d793 not found: ID does not exist" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517450 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-log" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517470 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-api" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.517519 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" containerName="glance-httpd" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.519135 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.522689 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.530305 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591314 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591362 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591382 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591436 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev\") pod 
\"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591549 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591582 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591613 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591658 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591682 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591714 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbpd\" (UniqueName: \"kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591749 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.591958 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.592000 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.592024 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693096 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693155 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693181 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693211 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693228 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693251 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbpd\" (UniqueName: \"kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693295 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693309 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693326 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693359 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693387 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693402 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693501 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693527 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693320 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693592 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693501 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693656 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693704 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693831 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.693913 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.702884 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.703308 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.712005 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbpd\" (UniqueName: \"kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.715973 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.717032 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.808343 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a6f09d-aff0-41fc-9838-b546e7aa4392" path="/var/lib/kubelet/pods/06a6f09d-aff0-41fc-9838-b546e7aa4392/volumes" Nov 26 13:38:15 crc kubenswrapper[4747]: I1126 13:38:15.871139 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:16 crc kubenswrapper[4747]: I1126 13:38:16.281979 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:38:16 crc kubenswrapper[4747]: I1126 13:38:16.423932 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerStarted","Data":"3fcc212bad313b154c5b9ca65e4c7a6eb13e9b8372c88bb5cd4b3a5dd58c502c"} Nov 26 13:38:17 crc kubenswrapper[4747]: I1126 13:38:17.433919 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerStarted","Data":"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"} Nov 26 13:38:17 crc kubenswrapper[4747]: I1126 13:38:17.435426 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerStarted","Data":"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"} Nov 26 13:38:17 crc kubenswrapper[4747]: I1126 13:38:17.435520 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerStarted","Data":"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"} Nov 26 13:38:17 crc kubenswrapper[4747]: I1126 13:38:17.459457 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.459439554 podStartE2EDuration="2.459439554s" podCreationTimestamp="2025-11-26 13:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:17.457527037 +0000 UTC m=+1384.443838102" watchObservedRunningTime="2025-11-26 13:38:17.459439554 +0000 UTC m=+1384.445750569" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.267240 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.267836 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.267849 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.302149 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.302429 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.321811 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.489877 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.489934 4747 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.489947 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.501773 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.503447 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:23 crc kubenswrapper[4747]: I1126 13:38:23.516225 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.871922 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.872345 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.872362 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.895068 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.898164 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:25 crc kubenswrapper[4747]: I1126 13:38:25.907751 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.513745 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.513804 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.513823 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.529580 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.530449 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:26 crc kubenswrapper[4747]: I1126 13:38:26.540306 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.895848 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.897912 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.906486 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.907983 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.916839 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:38:29 crc kubenswrapper[4747]: I1126 13:38:29.925263 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043421 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043512 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043550 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043584 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043633 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043658 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043684 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts\") pod \"glance-default-external-api-1\" (UID: 
\"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043809 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043850 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043872 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043913 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043962 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.043986 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044033 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044081 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmhlf\" (UniqueName: \"kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044181 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044219 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044256 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044277 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044300 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25nwp\" (UniqueName: \"kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044355 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044376 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044438 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044467 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044579 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044614 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044688 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.044713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.046511 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.047940 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.053114 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.054465 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.076303 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.079334 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145764 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145835 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145858 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145884 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145912 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145939 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145958 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.145987 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146007 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146112 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146138 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146161 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6f5\" (UniqueName: \"kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146191 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmhlf\" (UniqueName: \"kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146213 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146242 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146272 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146301 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146357 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146380 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25nwp\" (UniqueName: \"kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146400 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146423 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146464 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146489 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146516 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme\") pod 
\"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146536 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146554 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146575 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146599 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146620 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146643 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146663 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146696 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146716 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146735 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146761 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146786 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qhv\" (UniqueName: \"kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146807 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146833 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146857 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146878 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146904 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146941 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.146977 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147017 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147077 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147106 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147223 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147251 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147274 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147321 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147348 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147375 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147396 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147550 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.147616 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148069 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148260 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148280 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148407 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148391 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148432 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148436 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148446 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148472 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148517 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148535 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148551 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148552 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148610 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148630 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148635 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.148871 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.150661 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.152751 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.153917 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.154329 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data\") pod 
\"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.155332 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.163799 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.164304 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmhlf\" (UniqueName: \"kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.170166 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.173383 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.177335 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.177410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25nwp\" (UniqueName: \"kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp\") pod \"glance-default-external-api-2\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.180599 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.218924 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.234016 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249018 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249204 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249246 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249293 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249340 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249411 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249425 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc 
kubenswrapper[4747]: I1126 13:38:30.249360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249470 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6f5\" (UniqueName: \"kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249494 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249510 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249533 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249558 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249598 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249654 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249686 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 
13:38:30.249696 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249718 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249750 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249752 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249783 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249873 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249930 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249973 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qhv\" (UniqueName: \"kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250007 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250043 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250099 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250135 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250168 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250222 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.249654 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250253 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250380 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250407 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250664 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250692 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250664 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250728 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250766 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250769 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.250862 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251048 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251117 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-2\" (UID: 
\"3e44d518-6b6c-489a-adcc-6fed567dcb88\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251301 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251781 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251787 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.251135 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.254998 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.255653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.257338 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.272128 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qhv\" (UniqueName: \"kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.274459 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.274607 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.275381 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.276899 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6f5\" (UniqueName: \"kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.277220 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.297773 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-2\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.378972 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.390075 4747 util.go:30] "No sandbox for pod can be found. 
Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.651272 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:38:30 crc kubenswrapper[4747]: W1126 13:38:30.658556 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e44d518_6b6c_489a_adcc_6fed567dcb88.slice/crio-dcd51106881e34284735a7d6bef0f55d48137939b2583a94e661868ffba34c45 WatchSource:0}: Error finding container dcd51106881e34284735a7d6bef0f55d48137939b2583a94e661868ffba34c45: Status 404 returned error can't find the container with id dcd51106881e34284735a7d6bef0f55d48137939b2583a94e661868ffba34c45
Nov 26 13:38:30 crc kubenswrapper[4747]: I1126 13:38:30.658656 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:38:30 crc kubenswrapper[4747]: W1126 13:38:30.662414 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b064525_7709_473a_9a7c_5acc84a8d2f1.slice/crio-98fc6cfc707f0b7717476a7264801e4f03ba0238b19deec93426acdcd49fa11d WatchSource:0}: Error finding container 98fc6cfc707f0b7717476a7264801e4f03ba0238b19deec93426acdcd49fa11d: Status 404 returned error can't find the container with id 98fc6cfc707f0b7717476a7264801e4f03ba0238b19deec93426acdcd49fa11d
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:30.690489 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:38:31 crc kubenswrapper[4747]: W1126 13:38:30.696463 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfff47de_64fd_46b4_9b32_930a5b93c0da.slice/crio-75d49cb3b5474d37f97451a3a8629257403e39c034a10e09fc0992da7c186b97 WatchSource:0}: Error finding container 75d49cb3b5474d37f97451a3a8629257403e39c034a10e09fc0992da7c186b97: Status 404 returned error can't find the container with id 75d49cb3b5474d37f97451a3a8629257403e39c034a10e09fc0992da7c186b97
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:30.738184 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:38:31 crc kubenswrapper[4747]: W1126 13:38:30.742385 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75c739c9_6a55_431b_9de8_5a601cdb2396.slice/crio-20c6719183f3ccacc35225ad23350f3b7dff14edc6ccfd202f25f83f303c0966 WatchSource:0}: Error finding container 20c6719183f3ccacc35225ad23350f3b7dff14edc6ccfd202f25f83f303c0966: Status 404 returned error can't find the container with id 20c6719183f3ccacc35225ad23350f3b7dff14edc6ccfd202f25f83f303c0966
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.548435 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerStarted","Data":"972d1ecf0f6bba4f0f0ec4fa4bed79b5b86177a459f2b377f42b5c16edca4744"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.548958 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerStarted","Data":"0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.548975 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerStarted","Data":"7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.548984 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerStarted","Data":"75d49cb3b5474d37f97451a3a8629257403e39c034a10e09fc0992da7c186b97"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.550481 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerStarted","Data":"d4c9cec07d4d2f8c16121909e9891d356481da5081e3dfe9ffb40fcfc9836211"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.550523 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerStarted","Data":"0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.550534 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerStarted","Data":"1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.550543 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerStarted","Data":"dcd51106881e34284735a7d6bef0f55d48137939b2583a94e661868ffba34c45"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.555825 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerStarted","Data":"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.555915 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerStarted","Data":"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.555926 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerStarted","Data":"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.555936 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerStarted","Data":"98fc6cfc707f0b7717476a7264801e4f03ba0238b19deec93426acdcd49fa11d"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.558244 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerStarted","Data":"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.558284 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerStarted","Data":"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.558296 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerStarted","Data":"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.558307 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerStarted","Data":"20c6719183f3ccacc35225ad23350f3b7dff14edc6ccfd202f25f83f303c0966"}
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.590761 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=3.590744435 podStartE2EDuration="3.590744435s" podCreationTimestamp="2025-11-26 13:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:31.570962922 +0000 UTC m=+1398.557273937" watchObservedRunningTime="2025-11-26 13:38:31.590744435 +0000 UTC m=+1398.577055450"
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.595737 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.595718509 podStartE2EDuration="3.595718509s" podCreationTimestamp="2025-11-26 13:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:31.594124539 +0000 UTC m=+1398.580435544" watchObservedRunningTime="2025-11-26 13:38:31.595718509 +0000 UTC m=+1398.582029524"
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.623878 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.623831889 podStartE2EDuration="3.623831889s" podCreationTimestamp="2025-11-26 13:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:31.619579343 +0000 UTC m=+1398.605890368" watchObservedRunningTime="2025-11-26 13:38:31.623831889 +0000 UTC m=+1398.610142904"
Nov 26 13:38:31 crc kubenswrapper[4747]: I1126 13:38:31.652327 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=3.652310988 podStartE2EDuration="3.652310988s" podCreationTimestamp="2025-11-26 13:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:31.642149715 +0000 UTC m=+1398.628460750" watchObservedRunningTime="2025-11-26 13:38:31.652310988 +0000 UTC m=+1398.638622003"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.219867 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.220480 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.220495 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.235861 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.235913 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.235924 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.254683 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.255354 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.256944 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.259673 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.270499 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.282187 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.379195 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.379663 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.379700 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.390304 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.390342 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.390352 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.404423 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.407703 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.416526 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.418227 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.418331 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.425131 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634215 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634269 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634287 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634300 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634314 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634331 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634345 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634359 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634374 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634387 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634401 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.634417 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.646597 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.646684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.647723 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.647984 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.648131 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.649429 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.650745 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.651183 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.652944 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.653454 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.654795 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:40 crc kubenswrapper[4747]: I1126 13:38:40.657170 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:41 crc kubenswrapper[4747]: I1126 13:38:41.851422 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:38:41 crc kubenswrapper[4747]: I1126 13:38:41.867026 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:38:42 crc kubenswrapper[4747]: I1126 13:38:42.148949 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:38:42 crc kubenswrapper[4747]: I1126 13:38:42.156031 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.655147 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-log" containerID="cri-o://1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.655298 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-api" containerID="cri-o://d4c9cec07d4d2f8c16121909e9891d356481da5081e3dfe9ffb40fcfc9836211" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.655336 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-httpd" containerID="cri-o://0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656094 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-httpd" containerID="cri-o://0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656110 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-log" containerID="cri-o://7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656399 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-log" containerID="cri-o://41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656469 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-api" containerID="cri-o://87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656505 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-httpd" containerID="cri-o://d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656576 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-api" containerID="cri-o://a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656599 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-httpd" containerID="cri-o://a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.656522 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-log" containerID="cri-o://fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" gracePeriod=30
Nov 26 13:38:43 crc kubenswrapper[4747]: I1126 13:38:43.658933 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-api" containerID="cri-o://972d1ecf0f6bba4f0f0ec4fa4bed79b5b86177a459f2b377f42b5c16edca4744" gracePeriod=30
Nov 26 13:38:44 crc kubenswrapper[4747]: E1126 13:38:44.029421 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b064525_7709_473a_9a7c_5acc84a8d2f1.slice/crio-conmon-d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e44d518_6b6c_489a_adcc_6fed567dcb88.slice/crio-1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfff47de_64fd_46b4_9b32_930a5b93c0da.slice/crio-conmon-0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e44d518_6b6c_489a_adcc_6fed567dcb88.slice/crio-conmon-0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfff47de_64fd_46b4_9b32_930a5b93c0da.slice/crio-7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75c739c9_6a55_431b_9de8_5a601cdb2396.slice/crio-fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b064525_7709_473a_9a7c_5acc84a8d2f1.slice/crio-41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167.scope\": RecentStats: unable to find data in memory cache]"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.537492 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609506 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609594 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609641 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609696 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609751 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609790 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609822 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609869 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609893 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609933 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609960 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.609982 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx6f5\" (UniqueName: \"kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610003 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi\") pod \"1b064525-7709-473a-9a7c-5acc84a8d2f1\" (UID: \"1b064525-7709-473a-9a7c-5acc84a8d2f1\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610126 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610184 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run" (OuterVolumeSpecName: "run") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610206 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610209 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610270 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev" (OuterVolumeSpecName: "dev") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610290 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys" (OuterVolumeSpecName: "sys") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610615 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs" (OuterVolumeSpecName: "logs") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.610642 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611234 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-logs\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611259 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611272 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611284 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b064525-7709-473a-9a7c-5acc84a8d2f1-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611295 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-sys\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611306 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-dev\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611317 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611330 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.611364 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.618138 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts" (OuterVolumeSpecName: "scripts") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.619156 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5" (OuterVolumeSpecName: "kube-api-access-xx6f5") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "kube-api-access-xx6f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.622259 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.624297 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.660414 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666854 4747 generic.go:334] "Generic (PLEG): container finished" podID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666881 4747 generic.go:334] "Generic (PLEG): container finished" podID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666892 4747 generic.go:334] "Generic (PLEG): container finished" podID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" exitCode=143
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666933 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerDied","Data":"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666966 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerDied","Data":"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666979 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerDied","Data":"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.666990 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"75c739c9-6a55-431b-9de8-5a601cdb2396","Type":"ContainerDied","Data":"20c6719183f3ccacc35225ad23350f3b7dff14edc6ccfd202f25f83f303c0966"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.667008 4747 scope.go:117] "RemoveContainer" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.667145 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671603 4747 generic.go:334] "Generic (PLEG): container finished" podID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerID="972d1ecf0f6bba4f0f0ec4fa4bed79b5b86177a459f2b377f42b5c16edca4744" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671632 4747 generic.go:334] "Generic (PLEG): container finished" podID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerID="0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671641 4747 generic.go:334] "Generic (PLEG): container finished" podID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerID="7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1" exitCode=143
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671688 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerDied","Data":"972d1ecf0f6bba4f0f0ec4fa4bed79b5b86177a459f2b377f42b5c16edca4744"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671712 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerDied","Data":"0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.671722 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerDied","Data":"7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.672563 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.677884 4747 generic.go:334] "Generic (PLEG): container finished" podID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerID="d4c9cec07d4d2f8c16121909e9891d356481da5081e3dfe9ffb40fcfc9836211" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.677917 4747 generic.go:334] "Generic (PLEG): container finished" podID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerID="0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.677927 4747 generic.go:334] "Generic (PLEG): container finished" podID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerID="1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07" exitCode=143
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.677970 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerDied","Data":"d4c9cec07d4d2f8c16121909e9891d356481da5081e3dfe9ffb40fcfc9836211"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.677995 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerDied","Data":"0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.678008 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerDied","Data":"1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.680885 4747 generic.go:334] "Generic (PLEG): container finished" podID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.680977 4747 generic.go:334] "Generic (PLEG): container finished" podID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48" exitCode=0
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681031 4747 generic.go:334] "Generic (PLEG): container finished" podID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167" exitCode=143
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681205 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerDied","Data":"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681287 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerDied","Data":"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681356 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerDied","Data":"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"}
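The exit codes in the ContainerDied sequences above follow the usual 128+signal convention: glance-api and glance-httpd exit 0 (they shut down cleanly within the 30 s grace period requested at 13:38:43), while each glance-log tail exits 143, i.e. 128+15, killed by SIGTERM. A tiny decoder for that convention, offered purely as an illustration:

package main

import (
	"fmt"
	"syscall"
)

// describeExit decodes the container exit codes seen above: 0 is a clean
// shutdown; anything above 128 is "128 + signal number" (143 = SIGTERM).
func describeExit(code int) string {
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	default:
		return fmt.Sprintf("exited with code %d", code)
	}
}

func main() {
	for _, c := range []int{0, 143} {
		fmt.Printf("exitCode=%d: %s\n", c, describeExit(c))
	}
}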
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681408 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"1b064525-7709-473a-9a7c-5acc84a8d2f1","Type":"ContainerDied","Data":"98fc6cfc707f0b7717476a7264801e4f03ba0238b19deec93426acdcd49fa11d"}
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.681542 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712167 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712233 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712273 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712303 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712337 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712372 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712405 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25nwp\" (UniqueName: \"kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712432 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712450 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712490 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712518 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712556 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712577 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712612 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712615 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712646 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712699 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712726 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712747 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712773 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712794 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712819 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712840 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712862 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmhlf\" (UniqueName: \"kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712884 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712914 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712943 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi\") pod \"75c739c9-6a55-431b-9de8-5a601cdb2396\" (UID: \"75c739c9-6a55-431b-9de8-5a601cdb2396\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.712967 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713000 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev\") pod \"cfff47de-64fd-46b4-9b32-930a5b93c0da\" (UID: \"cfff47de-64fd-46b4-9b32-930a5b93c0da\") "
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713124 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs" (OuterVolumeSpecName: "logs") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713163 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys" (OuterVolumeSpecName: "sys") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "sys".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713346 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713364 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713378 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713389 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713411 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713427 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713439 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx6f5\" (UniqueName: \"kubernetes.io/projected/1b064525-7709-473a-9a7c-5acc84a8d2f1-kube-api-access-xx6f5\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.713450 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b064525-7709-473a-9a7c-5acc84a8d2f1-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.714733 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.715349 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run" (OuterVolumeSpecName: "run") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.715580 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts" (OuterVolumeSpecName: "scripts") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.715636 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.715991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs" (OuterVolumeSpecName: "logs") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716030 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716078 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716104 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716132 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys" (OuterVolumeSpecName: "sys") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716478 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp" (OuterVolumeSpecName: "kube-api-access-25nwp") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "kube-api-access-25nwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716868 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716899 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev" (OuterVolumeSpecName: "dev") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716923 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.716953 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run" (OuterVolumeSpecName: "run") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.717284 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.717469 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev" (OuterVolumeSpecName: "dev") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.717814 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.718448 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.719706 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.720042 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.720078 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance-cache") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.744725 4747 scope.go:117] "RemoveContainer" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.747350 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.753399 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts" (OuterVolumeSpecName: "scripts") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.753799 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf" (OuterVolumeSpecName: "kube-api-access-zmhlf") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "kube-api-access-zmhlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.757936 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data" (OuterVolumeSpecName: "config-data") pod "1b064525-7709-473a-9a7c-5acc84a8d2f1" (UID: "1b064525-7709-473a-9a7c-5acc84a8d2f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.781914 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.784570 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.810723 4747 scope.go:117] "RemoveContainer" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815006 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815109 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815134 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815166 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815184 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815235 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qhv\" (UniqueName: \"kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815291 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815312 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815331 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815349 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815385 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815402 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815457 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815502 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data\") pod \"3e44d518-6b6c-489a-adcc-6fed567dcb88\" (UID: \"3e44d518-6b6c-489a-adcc-6fed567dcb88\") " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815953 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815982 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.815996 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816010 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816021 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25nwp\" (UniqueName: \"kubernetes.io/projected/cfff47de-64fd-46b4-9b32-930a5b93c0da-kube-api-access-25nwp\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816030 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816037 4747 reconciler_common.go:293] 
"Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c739c9-6a55-431b-9de8-5a601cdb2396-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816046 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816072 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816090 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816102 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816113 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816123 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816130 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816143 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816155 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816165 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816172 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816180 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfff47de-64fd-46b4-9b32-930a5b93c0da-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816188 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmhlf\" (UniqueName: \"kubernetes.io/projected/75c739c9-6a55-431b-9de8-5a601cdb2396-kube-api-access-zmhlf\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816196 4747 
reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816204 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816211 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b064525-7709-473a-9a7c-5acc84a8d2f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816220 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/75c739c9-6a55-431b-9de8-5a601cdb2396-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816228 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.816236 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cfff47de-64fd-46b4-9b32-930a5b93c0da-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.817464 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.818211 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.818976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs" (OuterVolumeSpecName: "logs") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.819029 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run" (OuterVolumeSpecName: "run") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.819130 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys" (OuterVolumeSpecName: "sys") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.821979 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev" (OuterVolumeSpecName: "dev") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.822283 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.822412 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.822448 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.839035 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.843278 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance-cache") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.850492 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv" (OuterVolumeSpecName: "kube-api-access-s8qhv") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "kube-api-access-s8qhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.850495 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.850481 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts" (OuterVolumeSpecName: "scripts") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.861248 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.873513 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data" (OuterVolumeSpecName: "config-data") pod "75c739c9-6a55-431b-9de8-5a601cdb2396" (UID: "75c739c9-6a55-431b-9de8-5a601cdb2396"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.878359 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.879328 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.904749 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data" (OuterVolumeSpecName: "config-data") pod "cfff47de-64fd-46b4-9b32-930a5b93c0da" (UID: "cfff47de-64fd-46b4-9b32-930a5b93c0da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917460 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917485 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917494 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8qhv\" (UniqueName: \"kubernetes.io/projected/3e44d518-6b6c-489a-adcc-6fed567dcb88-kube-api-access-s8qhv\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917505 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917519 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917527 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917538 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917546 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917554 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917561 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917568 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917577 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c739c9-6a55-431b-9de8-5a601cdb2396-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917585 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfff47de-64fd-46b4-9b32-930a5b93c0da-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917593 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917600 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917608 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917615 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e44d518-6b6c-489a-adcc-6fed567dcb88-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917622 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.917629 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e44d518-6b6c-489a-adcc-6fed567dcb88-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.931918 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.934349 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.938635 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data" (OuterVolumeSpecName: "config-data") pod "3e44d518-6b6c-489a-adcc-6fed567dcb88" (UID: "3e44d518-6b6c-489a-adcc-6fed567dcb88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.968408 4747 scope.go:117] "RemoveContainer" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" Nov 26 13:38:44 crc kubenswrapper[4747]: E1126 13:38:44.969311 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": container with ID starting with a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca not found: ID does not exist" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.969346 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"} err="failed to get container status \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": rpc error: code = NotFound desc = could not find container \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": container with ID starting with a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.969370 4747 scope.go:117] "RemoveContainer" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" Nov 26 13:38:44 crc kubenswrapper[4747]: E1126 13:38:44.969709 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": container with ID starting with a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b not found: ID does not exist" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.969752 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b"} err="failed to get container status \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": rpc error: code = NotFound desc = could not find container \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": container with ID starting with a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.969790 4747 scope.go:117] "RemoveContainer" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" Nov 26 13:38:44 crc kubenswrapper[4747]: E1126 13:38:44.970127 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": container with ID starting with fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459 not found: ID does not exist" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970179 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"} err="failed to get container status \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": rpc error: code = NotFound desc = could not 
find container \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": container with ID starting with fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459 not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970219 4747 scope.go:117] "RemoveContainer" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970493 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"} err="failed to get container status \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": rpc error: code = NotFound desc = could not find container \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": container with ID starting with a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970514 4747 scope.go:117] "RemoveContainer" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970824 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b"} err="failed to get container status \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": rpc error: code = NotFound desc = could not find container \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": container with ID starting with a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.970855 4747 scope.go:117] "RemoveContainer" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.971091 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"} err="failed to get container status \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": rpc error: code = NotFound desc = could not find container \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": container with ID starting with fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459 not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.971130 4747 scope.go:117] "RemoveContainer" containerID="a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972324 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca"} err="failed to get container status \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": rpc error: code = NotFound desc = could not find container \"a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca\": container with ID starting with a3cf60b7be923f464d6cc278eae6602d473d59c0968d018c558be72b567bcfca not found: ID does not exist" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972343 4747 scope.go:117] "RemoveContainer" containerID="a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b" Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972620 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b"} err="failed to get container status \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": rpc error: code = NotFound desc = could not find container \"a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b\": container with ID starting with a2c44f3342d1ccd1572c265d946d85c8b4f03900d529e0c67c35ebdb8700f39b not found: ID does not exist"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972635 4747 scope.go:117] "RemoveContainer" containerID="fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972951 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459"} err="failed to get container status \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": rpc error: code = NotFound desc = could not find container \"fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459\": container with ID starting with fe0c61d3d4519c0409dc3c0ad0cb68ba78a1e207a39a88f532cab71b640d8459 not found: ID does not exist"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.972966 4747 scope.go:117] "RemoveContainer" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"
Nov 26 13:38:44 crc kubenswrapper[4747]: I1126 13:38:44.993306 4747 scope.go:117] "RemoveContainer" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.002030 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.007686 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.017859 4747 scope.go:117] "RemoveContainer" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.018697 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.018713 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.018708 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.018721 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e44d518-6b6c-489a-adcc-6fed567dcb88-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.024514 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.043849 4747 scope.go:117] "RemoveContainer" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"
Nov 26 13:38:45 crc kubenswrapper[4747]: E1126 13:38:45.044267 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": container with ID starting with 87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31 not found: ID does not exist" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044306 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"} err="failed to get container status \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": rpc error: code = NotFound desc = could not find container \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": container with ID starting with 87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044331 4747 scope.go:117] "RemoveContainer" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"
Nov 26 13:38:45 crc kubenswrapper[4747]: E1126 13:38:45.044576 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": container with ID starting with d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48 not found: ID does not exist" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044605 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"} err="failed to get container status \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": rpc error: code = NotFound desc = could not find container \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": container with ID starting with d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044626 4747 scope.go:117] "RemoveContainer" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"
Nov 26 13:38:45 crc kubenswrapper[4747]: E1126 13:38:45.044822 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": container with ID starting with 41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167 not found: ID does not exist" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044839 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"} err="failed to get container status \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": rpc error: code = NotFound desc = could not find container \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": container with ID starting with 41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.044852 4747 scope.go:117] "RemoveContainer" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045171 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"} err="failed to get container status \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": rpc error: code = NotFound desc = could not find container \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": container with ID starting with 87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045185 4747 scope.go:117] "RemoveContainer" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045366 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"} err="failed to get container status \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": rpc error: code = NotFound desc = could not find container \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": container with ID starting with d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045388 4747 scope.go:117] "RemoveContainer" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045545 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"} err="failed to get container status \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": rpc error: code = NotFound desc = could not find container \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": container with ID starting with 41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045562 4747 scope.go:117] "RemoveContainer" containerID="87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045738 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31"} err="failed to get container status \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": rpc error: code = NotFound desc = could not find container \"87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31\": container with ID starting with 87245aca12b99f558c089bf4c2760e5500a41ab9ac0e8fb851c16f31c7324a31 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045756 4747 scope.go:117] "RemoveContainer" containerID="d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045939 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48"} err="failed to get container status \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": rpc error: code = NotFound desc = could not find container \"d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48\": container with ID starting with d7ea8a7dd47cf7df3d168fa3a71cd1875d2f53e6ff1c9ea66fff44c53eba4a48 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.045961 4747 scope.go:117] "RemoveContainer" containerID="41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.046163 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167"} err="failed to get container status \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": rpc error: code = NotFound desc = could not find container \"41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167\": container with ID starting with 41490a2fb2ab1a63c870ecc98f385f5dc0a40703701558d72e99e52bc6ef8167 not found: ID does not exist"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.692937 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"cfff47de-64fd-46b4-9b32-930a5b93c0da","Type":"ContainerDied","Data":"75d49cb3b5474d37f97451a3a8629257403e39c034a10e09fc0992da7c186b97"}
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.692987 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.693025 4747 scope.go:117] "RemoveContainer" containerID="972d1ecf0f6bba4f0f0ec4fa4bed79b5b86177a459f2b377f42b5c16edca4744"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.696297 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"3e44d518-6b6c-489a-adcc-6fed567dcb88","Type":"ContainerDied","Data":"dcd51106881e34284735a7d6bef0f55d48137939b2583a94e661868ffba34c45"}
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.696531 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.716125 4747 scope.go:117] "RemoveContainer" containerID="0a5bd39bf7c71c2afb995373c90bc27f084418c3a5d46b5d7d4fea3c06ebe919"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.759429 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.778392 4747 scope.go:117] "RemoveContainer" containerID="7ad3e18dc1d45c1edcecca836b97a1550abb8ad0fcccc1ecbdb1f1e93a0f51c1"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.780456 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.791974 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.807921 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" path="/var/lib/kubelet/pods/1b064525-7709-473a-9a7c-5acc84a8d2f1/volumes"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.808985 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" path="/var/lib/kubelet/pods/75c739c9-6a55-431b-9de8-5a601cdb2396/volumes"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.810697 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" path="/var/lib/kubelet/pods/cfff47de-64fd-46b4-9b32-930a5b93c0da/volumes"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.811569 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.823428 4747 scope.go:117] "RemoveContainer" containerID="d4c9cec07d4d2f8c16121909e9891d356481da5081e3dfe9ffb40fcfc9836211"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.853505 4747 scope.go:117] "RemoveContainer" containerID="0263ac2686746b0ed7a51dcb003beeb87427d58af1e559292879dff075e982d3"
Nov 26 13:38:45 crc kubenswrapper[4747]: I1126 13:38:45.872926 4747 scope.go:117] "RemoveContainer" containerID="1844ca345bcc1fe9dff75bbe1a6ebe569de0c50eec3fd1b526bb181c20c21f07"
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.195958 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.196656 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-log" containerID="cri-o://bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.196753 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-httpd" containerID="cri-o://d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.196741 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-api" containerID="cri-o://f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.454929 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.455354 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-log" containerID="cri-o://331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.455487 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-api" containerID="cri-o://121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.455522 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-httpd" containerID="cri-o://a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba" gracePeriod=30
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.721648 4747 generic.go:334] "Generic (PLEG): container finished" podID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerID="a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba" exitCode=0
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.721694 4747 generic.go:334] "Generic (PLEG): container finished" podID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerID="331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a" exitCode=143
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.721731 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerDied","Data":"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"}
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.721791 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerDied","Data":"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"}
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.724690 4747 generic.go:334] "Generic (PLEG): container finished" podID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerID="d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6" exitCode=0
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.724729 4747 generic.go:334] "Generic (PLEG): container finished" podID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerID="bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201" exitCode=143
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.724753 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerDied","Data":"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"}
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.724783 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerDied","Data":"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"}
Nov 26 13:38:47 crc kubenswrapper[4747]: I1126 13:38:47.823684 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" path="/var/lib/kubelet/pods/3e44d518-6b6c-489a-adcc-6fed567dcb88/volumes"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.047295 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.178903 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.178981 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179006 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179023 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x4jj\" (UniqueName: \"kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179047 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179088 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179108 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179127 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179155 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179202 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run" (OuterVolumeSpecName: "run") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179202 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179217 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179287 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179317 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179343 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys\") pod \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\" (UID: \"aab468be-bedd-4414-a0fe-79aeec7b6bcf\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179616 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179744 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs" (OuterVolumeSpecName: "logs") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.179945 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180147 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180172 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-logs\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180185 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180216 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab468be-bedd-4414-a0fe-79aeec7b6bcf-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180230 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180269 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev" (OuterVolumeSpecName: "dev") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180317 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180349 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.180398 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys" (OuterVolumeSpecName: "sys") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.184852 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts" (OuterVolumeSpecName: "scripts") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.184921 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj" (OuterVolumeSpecName: "kube-api-access-2x4jj") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "kube-api-access-2x4jj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.185395 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.186160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance-cache") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.203491 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.277211 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data" (OuterVolumeSpecName: "config-data") pod "aab468be-bedd-4414-a0fe-79aeec7b6bcf" (UID: "aab468be-bedd-4414-a0fe-79aeec7b6bcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.282931 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.282970 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x4jj\" (UniqueName: \"kubernetes.io/projected/aab468be-bedd-4414-a0fe-79aeec7b6bcf-kube-api-access-2x4jj\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.282999 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283012 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab468be-bedd-4414-a0fe-79aeec7b6bcf-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283022 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283032 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283045 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283073 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-dev\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.283084 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aab468be-bedd-4414-a0fe-79aeec7b6bcf-sys\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.303995 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.313114 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.383866 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.383908 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.383940 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.383976 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.383997 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run" (OuterVolumeSpecName: "run") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384042 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384087 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev" (OuterVolumeSpecName: "dev") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384105 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384137 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384163 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384192 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384135 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384285 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384162 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384461 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs" (OuterVolumeSpecName: "logs") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384223 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnbpd\" (UniqueName: \"kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384569 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384607 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384638 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384655 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys" (OuterVolumeSpecName: "sys") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384679 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run\") pod \"c588909b-4e7c-4123-bfea-cfe9e338de28\" (UID: \"c588909b-4e7c-4123-bfea-cfe9e338de28\") "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384690 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.384962 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385013 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-sys\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385072 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385087 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385097 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-dev\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385108 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385119 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385130 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385141 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-logs\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385150 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c588909b-4e7c-4123-bfea-cfe9e338de28-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.385161 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.386748 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.387633 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd" (OuterVolumeSpecName: "kube-api-access-gnbpd") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "kube-api-access-gnbpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.387734 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.387837 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts" (OuterVolumeSpecName: "scripts") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.459461 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data" (OuterVolumeSpecName: "config-data") pod "c588909b-4e7c-4123-bfea-cfe9e338de28" (UID: "c588909b-4e7c-4123-bfea-cfe9e338de28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486207 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486241 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnbpd\" (UniqueName: \"kubernetes.io/projected/c588909b-4e7c-4123-bfea-cfe9e338de28-kube-api-access-gnbpd\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486253 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c588909b-4e7c-4123-bfea-cfe9e338de28-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486262 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c588909b-4e7c-4123-bfea-cfe9e338de28-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486296 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.486312 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" "
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.502654 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.502700 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.587829 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.588065 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.743034 4747 generic.go:334] "Generic (PLEG): container finished" podID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerID="f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf" exitCode=0
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.743172 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.743187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerDied","Data":"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"}
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.743292 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"aab468be-bedd-4414-a0fe-79aeec7b6bcf","Type":"ContainerDied","Data":"1f6445db5bf6865b3298092725827a8ee5458a530154ce5350ef225094b2449e"}
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.743336 4747 scope.go:117] "RemoveContainer" containerID="f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.745689 4747 generic.go:334] "Generic (PLEG): container finished" podID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerID="121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de" exitCode=0
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.745721 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerDied","Data":"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"}
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.745747 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c588909b-4e7c-4123-bfea-cfe9e338de28","Type":"ContainerDied","Data":"3fcc212bad313b154c5b9ca65e4c7a6eb13e9b8372c88bb5cd4b3a5dd58c502c"}
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.745800 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.777579 4747 scope.go:117] "RemoveContainer" containerID="d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.791884 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.825156 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.834585 4747 scope.go:117] "RemoveContainer" containerID="bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.834749 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.842532 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.852171 4747 scope.go:117] "RemoveContainer" containerID="f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.852653 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf\": container with ID starting with f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf not found: ID does not exist" containerID="f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.852686 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf"} err="failed to get container status \"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf\": rpc error: code = NotFound desc = could not find container \"f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf\": container with ID starting with f1fee4d6cd9ed05f2f659372bc39a4fa2cc9baeaf712f01f1f2b8fb6787843cf not found: ID does not exist"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.852709 4747 scope.go:117] "RemoveContainer" containerID="d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.852969 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6\": container with ID starting with d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6 not found: ID does not exist" containerID="d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.852993 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6"} err="failed to get container status \"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6\": rpc error: code = NotFound desc = could not find container \"d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6\": container with ID starting with d233969c18eca5e33b2f6c90aa899b770da2abfd3d72ea1194d0a424cb2951d6 not found: ID does not exist"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.853011 4747 scope.go:117] "RemoveContainer" containerID="bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.853288 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201\": container with ID starting with bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201 not found: ID does not exist" containerID="bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.853343 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201"} err="failed to get container status \"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201\": rpc error: code = NotFound desc = could not find container \"bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201\": container with ID starting with bd01f8e9992809b20153174e3c42df1fea177262a90d768a675f92d3ca664201 not found: ID does not exist"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.853384 4747 scope.go:117] "RemoveContainer" containerID="121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.874099 4747 scope.go:117] "RemoveContainer" containerID="a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.901207 4747 scope.go:117] "RemoveContainer" containerID="331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.938568 4747 scope.go:117] "RemoveContainer" containerID="121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.938938 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de\": container with ID starting with 121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de not found: ID does not exist" containerID="121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.938969 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de"} err="failed to get container status \"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de\": rpc error: code = NotFound desc = could not find container \"121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de\": container with ID starting with 121a00d427f10a501e1b35ed7dbe6eadee6022a8e0387525ba1e3b3c18d692de not found: ID does not exist"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.938992 4747 scope.go:117] "RemoveContainer" containerID="a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.939209 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba\": container with ID starting with a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba not found: ID does not exist" containerID="a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.939224 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba"} err="failed to get container status \"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba\": rpc error: code = NotFound desc = could not find container \"a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba\": container with ID starting with a56843f7e154ff7749010339b1a73d7cc7c96f4fae6ec7bb8e69104fb34899ba not found: ID does not exist"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.939238 4747 scope.go:117] "RemoveContainer" containerID="331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"
Nov 26 13:38:48 crc kubenswrapper[4747]: E1126 13:38:48.939690 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a\": container with ID starting with 331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a not found: ID does not exist" containerID="331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"
Nov 26 13:38:48 crc kubenswrapper[4747]: I1126 13:38:48.939710 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a"} err="failed to get container status \"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a\": rpc error: code = NotFound desc = could not find container \"331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a\": container with ID starting with 331c94ba6222e5d07172b59da99a565405e7a6805bbf251607528e5fdb01c23a not found: ID does not exist"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.600424 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5w595"]
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.606959 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5w595"]
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649397 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance6b4d-account-delete-75pxl"]
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649725 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649748 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649778 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649787 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649804 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649814 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649828 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649837 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649852 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649861 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649879 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649887 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649901 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649909 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649921 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649929 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649946 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649954 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649969 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649975 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.649986 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.649993 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650008 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650015 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650029 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650036 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650044 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650068 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650078 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650085 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650098 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650105 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-httpd"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650122 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650129 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: E1126 13:38:49.650138 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650144 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650307 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650323 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650334 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-log"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650344 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650354 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-api"
Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650366 4747 memory_manager.go:354] "RemoveStaleState removing state"
podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-api" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650378 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-api" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650390 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-log" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650398 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-log" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650409 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-api" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650422 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-log" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650433 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650446 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b064525-7709-473a-9a7c-5acc84a8d2f1" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650458 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44d518-6b6c-489a-adcc-6fed567dcb88" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650470 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650479 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650489 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c739c9-6a55-431b-9de8-5a601cdb2396" containerName="glance-log" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.650500 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfff47de-64fd-46b4-9b32-930a5b93c0da" containerName="glance-httpd" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.651044 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.657926 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance6b4d-account-delete-75pxl"] Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.705520 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q8sr\" (UniqueName: \"kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.705565 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.806285 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4208ff9a-2bce-4b42-9462-f612783e629a" path="/var/lib/kubelet/pods/4208ff9a-2bce-4b42-9462-f612783e629a/volumes" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.807106 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab468be-bedd-4414-a0fe-79aeec7b6bcf" path="/var/lib/kubelet/pods/aab468be-bedd-4414-a0fe-79aeec7b6bcf/volumes" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.807663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q8sr\" (UniqueName: \"kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.807719 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.807790 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c588909b-4e7c-4123-bfea-cfe9e338de28" path="/var/lib/kubelet/pods/c588909b-4e7c-4123-bfea-cfe9e338de28/volumes" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.808589 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:49 crc kubenswrapper[4747]: I1126 13:38:49.833767 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q8sr\" (UniqueName: \"kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr\") pod \"glance6b4d-account-delete-75pxl\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:50 crc kubenswrapper[4747]: I1126 
13:38:50.016386 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:50 crc kubenswrapper[4747]: I1126 13:38:50.305889 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance6b4d-account-delete-75pxl"] Nov 26 13:38:50 crc kubenswrapper[4747]: I1126 13:38:50.771228 4747 generic.go:334] "Generic (PLEG): container finished" podID="4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" containerID="e9956c8ebff7b1e13b5e6d147613c3662d655767d0b3349cdc4c08bd9e9578c6" exitCode=0 Nov 26 13:38:50 crc kubenswrapper[4747]: I1126 13:38:50.771313 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" event={"ID":"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56","Type":"ContainerDied","Data":"e9956c8ebff7b1e13b5e6d147613c3662d655767d0b3349cdc4c08bd9e9578c6"} Nov 26 13:38:50 crc kubenswrapper[4747]: I1126 13:38:50.771595 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" event={"ID":"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56","Type":"ContainerStarted","Data":"40c3008cdc2a05e7e8c0e08c4ce2a13739912745029129e0a0626869e3516ed7"} Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.075180 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.256549 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q8sr\" (UniqueName: \"kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr\") pod \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.256732 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts\") pod \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\" (UID: \"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56\") " Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.257714 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" (UID: "4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.263650 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr" (OuterVolumeSpecName: "kube-api-access-7q8sr") pod "4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" (UID: "4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56"). InnerVolumeSpecName "kube-api-access-7q8sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.358861 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q8sr\" (UniqueName: \"kubernetes.io/projected/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-kube-api-access-7q8sr\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.358927 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.789572 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" event={"ID":"4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56","Type":"ContainerDied","Data":"40c3008cdc2a05e7e8c0e08c4ce2a13739912745029129e0a0626869e3516ed7"} Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.789621 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c3008cdc2a05e7e8c0e08c4ce2a13739912745029129e0a0626869e3516ed7" Nov 26 13:38:52 crc kubenswrapper[4747]: I1126 13:38:52.789681 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance6b4d-account-delete-75pxl" Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.686429 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-lvrzx"] Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.694503 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-lvrzx"] Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.709591 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance6b4d-account-delete-75pxl"] Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.715879 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc"] Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.722197 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance6b4d-account-delete-75pxl"] Nov 26 13:38:54 crc kubenswrapper[4747]: I1126 13:38:54.725034 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-6b4d-account-create-update-j5bxc"] Nov 26 13:38:55 crc kubenswrapper[4747]: I1126 13:38:55.816279 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0894d3e0-e4ac-46c1-b040-71bb743e4403" path="/var/lib/kubelet/pods/0894d3e0-e4ac-46c1-b040-71bb743e4403/volumes" Nov 26 13:38:55 crc kubenswrapper[4747]: I1126 13:38:55.817678 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f49789e-2753-45b8-98d0-2bfb826f476f" path="/var/lib/kubelet/pods/0f49789e-2753-45b8-98d0-2bfb826f476f/volumes" Nov 26 13:38:55 crc kubenswrapper[4747]: I1126 13:38:55.818895 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" path="/var/lib/kubelet/pods/4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56/volumes" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.695794 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-f4w87"] Nov 26 13:38:56 crc kubenswrapper[4747]: E1126 13:38:56.696319 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" containerName="mariadb-account-delete" Nov 26 13:38:56 crc 
kubenswrapper[4747]: I1126 13:38:56.696354 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" containerName="mariadb-account-delete" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.696714 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9b6c6f-97c6-42a6-bde4-ee8c51f9be56" containerName="mariadb-account-delete" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.697510 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.704024 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-8f99-account-create-update-7tj9v"] Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.706411 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.708465 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.722637 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-8f99-account-create-update-7tj9v"] Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.751296 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-f4w87"] Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.819479 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.820188 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.820279 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttl4q\" (UniqueName: \"kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.820426 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbp28\" (UniqueName: \"kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.921302 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " 
pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.921367 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttl4q\" (UniqueName: \"kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.921477 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbp28\" (UniqueName: \"kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.921586 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.922137 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.923033 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.951860 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttl4q\" (UniqueName: \"kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q\") pod \"glance-8f99-account-create-update-7tj9v\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:56 crc kubenswrapper[4747]: I1126 13:38:56.953816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbp28\" (UniqueName: \"kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28\") pod \"glance-db-create-f4w87\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.019038 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.025456 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.455221 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-8f99-account-create-update-7tj9v"] Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.494157 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-f4w87"] Nov 26 13:38:57 crc kubenswrapper[4747]: W1126 13:38:57.498537 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a88030_95bf_4b2b_814b_da3925664d66.slice/crio-100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614 WatchSource:0}: Error finding container 100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614: Status 404 returned error can't find the container with id 100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614 Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.838970 4747 generic.go:334] "Generic (PLEG): container finished" podID="01a88030-95bf-4b2b-814b-da3925664d66" containerID="2900c5a89dc941d8beafed5154b9ebeec5c0a662189c2b0cb6f25cd3f8e0a21f" exitCode=0 Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.839028 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-f4w87" event={"ID":"01a88030-95bf-4b2b-814b-da3925664d66","Type":"ContainerDied","Data":"2900c5a89dc941d8beafed5154b9ebeec5c0a662189c2b0cb6f25cd3f8e0a21f"} Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.839433 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-f4w87" event={"ID":"01a88030-95bf-4b2b-814b-da3925664d66","Type":"ContainerStarted","Data":"100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614"} Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.841498 4747 generic.go:334] "Generic (PLEG): container finished" podID="ca4be269-e8c9-435b-8a1b-7023727afd2f" containerID="822648aff5df9a047f56824773596120f2f81c70a9cc286954018bfad021d673" exitCode=0 Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.841537 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" event={"ID":"ca4be269-e8c9-435b-8a1b-7023727afd2f","Type":"ContainerDied","Data":"822648aff5df9a047f56824773596120f2f81c70a9cc286954018bfad021d673"} Nov 26 13:38:57 crc kubenswrapper[4747]: I1126 13:38:57.841562 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" event={"ID":"ca4be269-e8c9-435b-8a1b-7023727afd2f","Type":"ContainerStarted","Data":"4116711f8a821286fc0fc20195ee16c84a949cde427ed5a01487a2f5e6fa86de"} Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.194788 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.199676 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.358642 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttl4q\" (UniqueName: \"kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q\") pod \"ca4be269-e8c9-435b-8a1b-7023727afd2f\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.358720 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts\") pod \"ca4be269-e8c9-435b-8a1b-7023727afd2f\" (UID: \"ca4be269-e8c9-435b-8a1b-7023727afd2f\") " Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.358896 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts\") pod \"01a88030-95bf-4b2b-814b-da3925664d66\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.359040 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbp28\" (UniqueName: \"kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28\") pod \"01a88030-95bf-4b2b-814b-da3925664d66\" (UID: \"01a88030-95bf-4b2b-814b-da3925664d66\") " Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.359635 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01a88030-95bf-4b2b-814b-da3925664d66" (UID: "01a88030-95bf-4b2b-814b-da3925664d66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.359957 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca4be269-e8c9-435b-8a1b-7023727afd2f" (UID: "ca4be269-e8c9-435b-8a1b-7023727afd2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.364184 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q" (OuterVolumeSpecName: "kube-api-access-ttl4q") pod "ca4be269-e8c9-435b-8a1b-7023727afd2f" (UID: "ca4be269-e8c9-435b-8a1b-7023727afd2f"). InnerVolumeSpecName "kube-api-access-ttl4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.365254 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28" (OuterVolumeSpecName: "kube-api-access-nbp28") pod "01a88030-95bf-4b2b-814b-da3925664d66" (UID: "01a88030-95bf-4b2b-814b-da3925664d66"). InnerVolumeSpecName "kube-api-access-nbp28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.460383 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbp28\" (UniqueName: \"kubernetes.io/projected/01a88030-95bf-4b2b-814b-da3925664d66-kube-api-access-nbp28\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.460423 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttl4q\" (UniqueName: \"kubernetes.io/projected/ca4be269-e8c9-435b-8a1b-7023727afd2f-kube-api-access-ttl4q\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.460439 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4be269-e8c9-435b-8a1b-7023727afd2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.460451 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a88030-95bf-4b2b-814b-da3925664d66-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.862252 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-f4w87" event={"ID":"01a88030-95bf-4b2b-814b-da3925664d66","Type":"ContainerDied","Data":"100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614"} Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.862313 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="100ad44ef52408883bb2bbef5d3482a8fb50aa95fd0429a34f823c246f9ba614" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.862284 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-f4w87" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.864691 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" event={"ID":"ca4be269-e8c9-435b-8a1b-7023727afd2f","Type":"ContainerDied","Data":"4116711f8a821286fc0fc20195ee16c84a949cde427ed5a01487a2f5e6fa86de"} Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.864730 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4116711f8a821286fc0fc20195ee16c84a949cde427ed5a01487a2f5e6fa86de" Nov 26 13:38:59 crc kubenswrapper[4747]: I1126 13:38:59.864808 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-8f99-account-create-update-7tj9v" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.944035 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-sh7fn"] Nov 26 13:39:01 crc kubenswrapper[4747]: E1126 13:39:01.944673 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a88030-95bf-4b2b-814b-da3925664d66" containerName="mariadb-database-create" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.944687 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a88030-95bf-4b2b-814b-da3925664d66" containerName="mariadb-database-create" Nov 26 13:39:01 crc kubenswrapper[4747]: E1126 13:39:01.944754 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4be269-e8c9-435b-8a1b-7023727afd2f" containerName="mariadb-account-create-update" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.944763 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4be269-e8c9-435b-8a1b-7023727afd2f" containerName="mariadb-account-create-update" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.944929 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4be269-e8c9-435b-8a1b-7023727afd2f" containerName="mariadb-account-create-update" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.944945 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a88030-95bf-4b2b-814b-da3925664d66" containerName="mariadb-database-create" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.945461 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.948818 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.948932 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lm895" Nov 26 13:39:01 crc kubenswrapper[4747]: I1126 13:39:01.979459 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-sh7fn"] Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.100340 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.100388 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpmd6\" (UniqueName: \"kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.100596 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.202695 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.202773 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpmd6\" (UniqueName: \"kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.202828 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.208312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.209193 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.221919 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpmd6\" (UniqueName: \"kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6\") pod \"glance-db-sync-sh7fn\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.267414 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.706320 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-sh7fn"] Nov 26 13:39:02 crc kubenswrapper[4747]: I1126 13:39:02.893205 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-sh7fn" event={"ID":"02e61bfb-6cd0-4c8a-abbf-b836a6abb976","Type":"ContainerStarted","Data":"9c475e7c49e2fced3dc2f49c8e3ecc2accf23795cc0cc4fd9f78025c867f28ef"} Nov 26 13:39:03 crc kubenswrapper[4747]: I1126 13:39:03.900807 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-sh7fn" event={"ID":"02e61bfb-6cd0-4c8a-abbf-b836a6abb976","Type":"ContainerStarted","Data":"e706477bb894df737ffd1dd9b6bfd6d51e63ad14d544d1df8d54207434b52f93"} Nov 26 13:39:03 crc kubenswrapper[4747]: I1126 13:39:03.925122 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-sh7fn" podStartSLOduration=2.925105832 podStartE2EDuration="2.925105832s" podCreationTimestamp="2025-11-26 13:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:03.919280067 +0000 UTC m=+1430.905591122" watchObservedRunningTime="2025-11-26 13:39:03.925105832 +0000 UTC m=+1430.911416847" Nov 26 13:39:06 crc kubenswrapper[4747]: I1126 13:39:06.922977 4747 generic.go:334] "Generic (PLEG): container finished" podID="02e61bfb-6cd0-4c8a-abbf-b836a6abb976" containerID="e706477bb894df737ffd1dd9b6bfd6d51e63ad14d544d1df8d54207434b52f93" exitCode=0 Nov 26 13:39:06 crc kubenswrapper[4747]: I1126 13:39:06.923204 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-sh7fn" event={"ID":"02e61bfb-6cd0-4c8a-abbf-b836a6abb976","Type":"ContainerDied","Data":"e706477bb894df737ffd1dd9b6bfd6d51e63ad14d544d1df8d54207434b52f93"} Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.402180 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.595543 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data\") pod \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.595875 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpmd6\" (UniqueName: \"kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6\") pod \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.595918 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data\") pod \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\" (UID: \"02e61bfb-6cd0-4c8a-abbf-b836a6abb976\") " Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.600757 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6" (OuterVolumeSpecName: "kube-api-access-dpmd6") pod "02e61bfb-6cd0-4c8a-abbf-b836a6abb976" (UID: "02e61bfb-6cd0-4c8a-abbf-b836a6abb976"). InnerVolumeSpecName "kube-api-access-dpmd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.604334 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "02e61bfb-6cd0-4c8a-abbf-b836a6abb976" (UID: "02e61bfb-6cd0-4c8a-abbf-b836a6abb976"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.633420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data" (OuterVolumeSpecName: "config-data") pod "02e61bfb-6cd0-4c8a-abbf-b836a6abb976" (UID: "02e61bfb-6cd0-4c8a-abbf-b836a6abb976"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.697110 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.697142 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpmd6\" (UniqueName: \"kubernetes.io/projected/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-kube-api-access-dpmd6\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.697152 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e61bfb-6cd0-4c8a-abbf-b836a6abb976-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.940889 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-sh7fn" event={"ID":"02e61bfb-6cd0-4c8a-abbf-b836a6abb976","Type":"ContainerDied","Data":"9c475e7c49e2fced3dc2f49c8e3ecc2accf23795cc0cc4fd9f78025c867f28ef"} Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.941212 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c475e7c49e2fced3dc2f49c8e3ecc2accf23795cc0cc4fd9f78025c867f28ef" Nov 26 13:39:08 crc kubenswrapper[4747]: I1126 13:39:08.940993 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-sh7fn" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.005374 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: E1126 13:39:10.005641 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e61bfb-6cd0-4c8a-abbf-b836a6abb976" containerName="glance-db-sync" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.005653 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e61bfb-6cd0-4c8a-abbf-b836a6abb976" containerName="glance-db-sync" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.005787 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e61bfb-6cd0-4c8a-abbf-b836a6abb976" containerName="glance-db-sync" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.006462 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.008612 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.008627 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.010425 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lm895" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.029536 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113676 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113732 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113760 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113781 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113818 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113841 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113869 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113906 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfk7\" (UniqueName: \"kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113944 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.113994 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.114021 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.114048 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215080 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215185 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215216 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215241 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215262 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215281 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215301 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215355 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215394 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfk7\" (UniqueName: \"kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215440 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev\") pod 
\"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215465 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215495 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215511 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.215595 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216039 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216202 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216308 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216333 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216339 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run\") pod \"glance-default-external-api-0\" (UID: 
\"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216399 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216448 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.216853 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.219976 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.221491 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.223552 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.245834 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.246152 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfk7\" (UniqueName: \"kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.250797 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.276379 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.277530 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.279416 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.292525 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.322976 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.425838 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426164 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426195 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426248 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426271 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhk7\" (UniqueName: \"kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426297 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426321 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426343 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426371 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426409 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426427 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426447 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426487 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.426519 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.527972 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528024 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528045 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528099 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528131 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528185 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528146 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528132 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528232 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhk7\" (UniqueName: \"kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528398 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528482 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528508 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528583 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528685 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528715 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528753 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528765 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.528935 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529015 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529659 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529047 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529717 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529733 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.529791 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.533111 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.533574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.552197 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhk7\" (UniqueName: \"kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7\") pod 
\"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.552774 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.556050 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.608569 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.774664 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.846114 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.955453 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerStarted","Data":"3685d22bbc37e203433cbdce05090507786bbfdf8f3cb59faf534bf13dd86e45"} Nov 26 13:39:10 crc kubenswrapper[4747]: I1126 13:39:10.957566 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerStarted","Data":"bc387b70a27f14b1412c7f041edb21e7714430efbed577d2aac27043b6e6609e"} Nov 26 13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.125435 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.965002 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerStarted","Data":"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002"} Nov 26 13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.965369 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerStarted","Data":"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8"} Nov 26 13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.967270 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerStarted","Data":"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"} Nov 26 13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.967292 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerStarted","Data":"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20"} Nov 26 
13:39:11 crc kubenswrapper[4747]: I1126 13:39:11.992376 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.9923541030000003 podStartE2EDuration="2.992354103s" podCreationTimestamp="2025-11-26 13:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:11.985210575 +0000 UTC m=+1438.971521610" watchObservedRunningTime="2025-11-26 13:39:11.992354103 +0000 UTC m=+1438.978665128" Nov 26 13:39:12 crc kubenswrapper[4747]: I1126 13:39:12.019465 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=3.019448258 podStartE2EDuration="3.019448258s" podCreationTimestamp="2025-11-26 13:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:12.017839098 +0000 UTC m=+1439.004150123" watchObservedRunningTime="2025-11-26 13:39:12.019448258 +0000 UTC m=+1439.005759293" Nov 26 13:39:12 crc kubenswrapper[4747]: I1126 13:39:12.976274 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-log" containerID="cri-o://650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" gracePeriod=30 Nov 26 13:39:12 crc kubenswrapper[4747]: I1126 13:39:12.976328 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-httpd" containerID="cri-o://cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" gracePeriod=30 Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.407859 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.466667 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhk7\" (UniqueName: \"kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467092 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467139 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467168 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467199 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467229 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467286 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run" (OuterVolumeSpecName: "run") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467304 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467307 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys" (OuterVolumeSpecName: "sys") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467328 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467353 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467382 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467356 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467374 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467406 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467491 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs" (OuterVolumeSpecName: "logs") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467586 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467581 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev" (OuterVolumeSpecName: "dev") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467656 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.467696 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme\") pod \"004c87ae-cecd-4706-97a5-3d2ad495474d\" (UID: \"004c87ae-cecd-4706-97a5-3d2ad495474d\") " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468264 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468363 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468472 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468506 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468523 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468537 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468553 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468571 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468586 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.468600 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/004c87ae-cecd-4706-97a5-3d2ad495474d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.472933 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.474038 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7" (OuterVolumeSpecName: "kube-api-access-4rhk7") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "kube-api-access-4rhk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.476964 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.479586 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts" (OuterVolumeSpecName: "scripts") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.509025 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data" (OuterVolumeSpecName: "config-data") pod "004c87ae-cecd-4706-97a5-3d2ad495474d" (UID: "004c87ae-cecd-4706-97a5-3d2ad495474d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570534 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570594 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570609 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004c87ae-cecd-4706-97a5-3d2ad495474d-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570621 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/004c87ae-cecd-4706-97a5-3d2ad495474d-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570633 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhk7\" (UniqueName: \"kubernetes.io/projected/004c87ae-cecd-4706-97a5-3d2ad495474d-kube-api-access-4rhk7\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.570723 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.588978 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.596458 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.673899 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.673944 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984289 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerID="cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" exitCode=0 Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984327 4747 generic.go:334] "Generic (PLEG): container finished" podID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerID="650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" exitCode=143 Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984349 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerDied","Data":"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002"} Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984377 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerDied","Data":"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8"} Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984389 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"004c87ae-cecd-4706-97a5-3d2ad495474d","Type":"ContainerDied","Data":"bc387b70a27f14b1412c7f041edb21e7714430efbed577d2aac27043b6e6609e"} Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984408 4747 scope.go:117] "RemoveContainer" containerID="cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" Nov 26 13:39:13 crc kubenswrapper[4747]: I1126 13:39:13.984543 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.013525 4747 scope.go:117] "RemoveContainer" containerID="650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.030101 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.034243 4747 scope.go:117] "RemoveContainer" containerID="cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" Nov 26 13:39:14 crc kubenswrapper[4747]: E1126 13:39:14.034789 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002\": container with ID starting with cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002 not found: ID does not exist" containerID="cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.034896 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002"} err="failed to get container status \"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002\": rpc error: code = NotFound desc = could not find container \"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002\": container with ID starting with cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002 not found: ID does not exist" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.034986 4747 scope.go:117] "RemoveContainer" containerID="650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" Nov 26 13:39:14 crc kubenswrapper[4747]: E1126 
13:39:14.035401 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8\": container with ID starting with 650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8 not found: ID does not exist" containerID="650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.035440 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8"} err="failed to get container status \"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8\": rpc error: code = NotFound desc = could not find container \"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8\": container with ID starting with 650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8 not found: ID does not exist" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.035460 4747 scope.go:117] "RemoveContainer" containerID="cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.036441 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002"} err="failed to get container status \"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002\": rpc error: code = NotFound desc = could not find container \"cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002\": container with ID starting with cd549eb6384a1cbe7dc06d82c782d9ec275691fe4ad7d98bc2469f841800d002 not found: ID does not exist" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.036462 4747 scope.go:117] "RemoveContainer" containerID="650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.036701 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8"} err="failed to get container status \"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8\": rpc error: code = NotFound desc = could not find container \"650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8\": container with ID starting with 650e08df9c4cfadb4959494fc789e75db7d6f3f1f066d76776332c940039a4c8 not found: ID does not exist" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.041524 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.069630 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:14 crc kubenswrapper[4747]: E1126 13:39:14.069975 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-httpd" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.069999 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-httpd" Nov 26 13:39:14 crc kubenswrapper[4747]: E1126 13:39:14.070019 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-log" Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.070028 
4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-log"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.070190 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-httpd"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.070209 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" containerName="glance-log"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.071107 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.076182 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.104352 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179698 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179745 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179764 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179796 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179827 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179845 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179859 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179884 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179902 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179923 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179939 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179955 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179973 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.179991 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcbfk\" (UniqueName: \"kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281284 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281371 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281411 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281414 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281464 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281477 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281493 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281523 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281497 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281545 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281565 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281596 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281626 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281650 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281674 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281698 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcbfk\" (UniqueName: \"kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281725 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281746 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281811 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.282032 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.281597 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.284298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.284535 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.287209 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.288761 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.307973 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcbfk\" (UniqueName: \"kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.308717 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.310325 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.386591 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.851737 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:39:14 crc kubenswrapper[4747]: W1126 13:39:14.853186 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17a955ae_240d_4709_94a7_474f45be1d46.slice/crio-2b875406b551519d194a1def75364768d5d847d10705bfc19ab812bb5ea86eb5 WatchSource:0}: Error finding container 2b875406b551519d194a1def75364768d5d847d10705bfc19ab812bb5ea86eb5: Status 404 returned error can't find the container with id 2b875406b551519d194a1def75364768d5d847d10705bfc19ab812bb5ea86eb5
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.996878 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerStarted","Data":"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"}
Nov 26 13:39:14 crc kubenswrapper[4747]: I1126 13:39:14.996924 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerStarted","Data":"2b875406b551519d194a1def75364768d5d847d10705bfc19ab812bb5ea86eb5"}
Nov 26 13:39:15 crc kubenswrapper[4747]: I1126 13:39:15.809324 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004c87ae-cecd-4706-97a5-3d2ad495474d" path="/var/lib/kubelet/pods/004c87ae-cecd-4706-97a5-3d2ad495474d/volumes"
Nov 26 13:39:16 crc kubenswrapper[4747]: I1126 13:39:16.005624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerStarted","Data":"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"}
Nov 26 13:39:16 crc kubenswrapper[4747]: I1126 13:39:16.045325 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.04530473 podStartE2EDuration="2.04530473s" podCreationTimestamp="2025-11-26 13:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:16.034425399 +0000 UTC m=+1443.020736504" watchObservedRunningTime="2025-11-26 13:39:16.04530473 +0000 UTC m=+1443.031615735"
Nov 26 13:39:20 crc kubenswrapper[4747]: I1126 13:39:20.323192 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:20 crc kubenswrapper[4747]: I1126 13:39:20.323689 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:20 crc kubenswrapper[4747]: I1126 13:39:20.358428 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:20 crc kubenswrapper[4747]: I1126 13:39:20.372923 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:21 crc kubenswrapper[4747]: I1126 13:39:21.039038 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:21 crc kubenswrapper[4747]: I1126 13:39:21.039101 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:22 crc kubenswrapper[4747]: I1126 13:39:22.909331 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:22 crc kubenswrapper[4747]: I1126 13:39:22.912922 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:24 crc kubenswrapper[4747]: I1126 13:39:24.387730 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:24 crc kubenswrapper[4747]: I1126 13:39:24.390926 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:24 crc kubenswrapper[4747]: I1126 13:39:24.415768 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:24 crc kubenswrapper[4747]: I1126 13:39:24.427931 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:25 crc kubenswrapper[4747]: I1126 13:39:25.065354 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:25 crc kubenswrapper[4747]: I1126 13:39:25.065440 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:26 crc kubenswrapper[4747]: I1126 13:39:26.996416 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:27 crc kubenswrapper[4747]: I1126 13:39:27.024294 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.267186 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.269080 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.271904 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.273158 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.286764 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.312338 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.312494 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.323453 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337408 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337527 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337563 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337637 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337689 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337762 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337819 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337845 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337879 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337920 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337964 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.337990 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.420218 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.422021 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.429510 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.430876 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.436989 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443032 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443048 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"]
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443100 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443497 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443535 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443563 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443585 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443599 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443616 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443638 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443668 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443695 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443716 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443805 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443849 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443873 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443888 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443919 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443946 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443965 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443990 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444024 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444076 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444090 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444107 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444138 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444164 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchb7\" (UniqueName: \"kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444184 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.444218 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.443145 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447324 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447479 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447554 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447613 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447725 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447812 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447926 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.447947 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.455062 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.464046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.466752 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.472200 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.472950 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-1\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545472 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545520 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545549 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545697 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545715 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545784 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545808 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545826 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545846 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545871 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545874 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.545897 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546017 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546025 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546078 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546245 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546289 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546322 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546352 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546380 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546400 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546420 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546448 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546466 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fcr\" (UniqueName: \"kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546488 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546519 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546541 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546587 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546615 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546642 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546681 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546707 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546770 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z22c\" (UniqueName: \"kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546802 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546830 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546862 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546895 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546921 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546949 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546974 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.546996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547033 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchb7\" (UniqueName: \"kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547092 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547119 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547150 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547441 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547611 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.547769 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.548732 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.548799 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.551358 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.554942 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.567456 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchb7\" (UniqueName: \"kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.568962 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.575151 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-2\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.638931 4747 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648744 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648836 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648863 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52fcr\" (UniqueName: \"kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648913 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648934 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648964 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.648985 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649005 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 
13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649026 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z22c\" (UniqueName: \"kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649046 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649083 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649108 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649125 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649143 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649157 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649179 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649206 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649228 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649247 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649286 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649314 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649330 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649347 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649382 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649401 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649446 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649541 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649960 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650305 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650347 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650351 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650378 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650406 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme\") pod 
\"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650434 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650456 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650486 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650697 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650785 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650777 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650789 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650860 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650885 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650916 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.650914 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.649968 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.651393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.653625 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.654561 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.658126 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.658494 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.662573 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.674310 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52fcr\" (UniqueName: 
\"kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.679419 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.679965 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z22c\" (UniqueName: \"kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.696340 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.704356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-1\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.711792 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.755464 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:29 crc kubenswrapper[4747]: I1126 13:39:29.822984 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:30 crc kubenswrapper[4747]: I1126 13:39:30.092191 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:39:30 crc kubenswrapper[4747]: W1126 13:39:30.097785 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9b45e02_39f8_40ef_9d98_cf4ca2e7a7b8.slice/crio-61316e35b43e859234328d675b1afffe00e013e89200113d8e2ce3be59e92f46 WatchSource:0}: Error finding container 61316e35b43e859234328d675b1afffe00e013e89200113d8e2ce3be59e92f46: Status 404 returned error can't find the container with id 61316e35b43e859234328d675b1afffe00e013e89200113d8e2ce3be59e92f46 Nov 26 13:39:30 crc kubenswrapper[4747]: I1126 13:39:30.168589 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:39:30 crc kubenswrapper[4747]: W1126 13:39:30.172678 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc62715dc_7831_4903_bd3b_65e1d8cbcd58.slice/crio-b6827cdce6d81db264381c3c098df66d4ac1edcc1a7a6ae4d039d311da13409c WatchSource:0}: Error finding container b6827cdce6d81db264381c3c098df66d4ac1edcc1a7a6ae4d039d311da13409c: Status 404 returned error can't find the container with id b6827cdce6d81db264381c3c098df66d4ac1edcc1a7a6ae4d039d311da13409c Nov 26 13:39:30 crc kubenswrapper[4747]: I1126 13:39:30.272150 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:39:30 crc kubenswrapper[4747]: W1126 13:39:30.275338 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb1df98_acc9_4cdd_af43_fe79dd497ede.slice/crio-d47477d9bc2fa7d83b9227f8e64cad19476038576d619bc67b538ccd1dd77c19 WatchSource:0}: Error finding container d47477d9bc2fa7d83b9227f8e64cad19476038576d619bc67b538ccd1dd77c19: Status 404 returned error can't find the container with id d47477d9bc2fa7d83b9227f8e64cad19476038576d619bc67b538ccd1dd77c19 Nov 26 13:39:30 crc kubenswrapper[4747]: I1126 13:39:30.281588 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:39:30 crc kubenswrapper[4747]: W1126 13:39:30.281993 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac0196c_4bba_45fc_8222_956473419c32.slice/crio-eb94d3c35f998b20de33e1b59234d333a2aabe9d1bb90bb82ce562f177c6e588 WatchSource:0}: Error finding container eb94d3c35f998b20de33e1b59234d333a2aabe9d1bb90bb82ce562f177c6e588: Status 404 returned error can't find the container with id eb94d3c35f998b20de33e1b59234d333a2aabe9d1bb90bb82ce562f177c6e588 Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.115937 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerStarted","Data":"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.116664 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" 
event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerStarted","Data":"eb94d3c35f998b20de33e1b59234d333a2aabe9d1bb90bb82ce562f177c6e588"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.117991 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerStarted","Data":"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.118125 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerStarted","Data":"d47477d9bc2fa7d83b9227f8e64cad19476038576d619bc67b538ccd1dd77c19"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.127637 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerStarted","Data":"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.127693 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerStarted","Data":"61316e35b43e859234328d675b1afffe00e013e89200113d8e2ce3be59e92f46"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.131042 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerStarted","Data":"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd"} Nov 26 13:39:31 crc kubenswrapper[4747]: I1126 13:39:31.131091 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerStarted","Data":"b6827cdce6d81db264381c3c098df66d4ac1edcc1a7a6ae4d039d311da13409c"} Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.139156 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerStarted","Data":"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35"} Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.141322 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerStarted","Data":"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4"} Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.142675 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerStarted","Data":"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04"} Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.143915 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerStarted","Data":"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab"} Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.171262 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=4.171245182 podStartE2EDuration="4.171245182s" podCreationTimestamp="2025-11-26 13:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:32.164424752 +0000 UTC m=+1459.150735767" watchObservedRunningTime="2025-11-26 13:39:32.171245182 +0000 UTC m=+1459.157556197" Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.195931 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=4.195911816 podStartE2EDuration="4.195911816s" podCreationTimestamp="2025-11-26 13:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:32.194304266 +0000 UTC m=+1459.180615291" watchObservedRunningTime="2025-11-26 13:39:32.195911816 +0000 UTC m=+1459.182222831" Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.220888 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=4.220868367 podStartE2EDuration="4.220868367s" podCreationTimestamp="2025-11-26 13:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:32.21496069 +0000 UTC m=+1459.201271715" watchObservedRunningTime="2025-11-26 13:39:32.220868367 +0000 UTC m=+1459.207179382" Nov 26 13:39:32 crc kubenswrapper[4747]: I1126 13:39:32.243908 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=4.2438863510000004 podStartE2EDuration="4.243886351s" podCreationTimestamp="2025-11-26 13:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:32.236113547 +0000 UTC m=+1459.222424572" watchObservedRunningTime="2025-11-26 13:39:32.243886351 +0000 UTC m=+1459.230197366" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.639394 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.639851 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.653813 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.653861 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.665204 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.679143 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.680104 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 
13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.698933 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.755867 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.756137 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.781787 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.796395 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.823335 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.823389 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.851293 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:39 crc kubenswrapper[4747]: I1126 13:39:39.860442 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.215412 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.215825 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.215933 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.216181 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.216209 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.216222 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.216233 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:40 crc kubenswrapper[4747]: I1126 13:39:40.216244 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.234812 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.235192 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: 
I1126 13:39:42.234813 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.235294 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.234935 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.235421 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.379475 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.379622 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.386426 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.389252 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.438366 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.451391 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.588750 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.591602 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:42 crc kubenswrapper[4747]: I1126 13:39:42.688634 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:43 crc kubenswrapper[4747]: I1126 13:39:43.308929 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:39:43 crc kubenswrapper[4747]: I1126 13:39:43.320860 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:39:43 crc kubenswrapper[4747]: I1126 13:39:43.492529 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:39:43 crc kubenswrapper[4747]: I1126 13:39:43.501794 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:39:44 crc kubenswrapper[4747]: I1126 13:39:44.247434 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-httpd" containerID="cri-o://f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35" gracePeriod=30 Nov 26 13:39:44 crc kubenswrapper[4747]: I1126 13:39:44.247393 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-log" 
containerID="cri-o://38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd" gracePeriod=30 Nov 26 13:39:44 crc kubenswrapper[4747]: I1126 13:39:44.254150 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.122:9292/healthcheck\": EOF" Nov 26 13:39:44 crc kubenswrapper[4747]: I1126 13:39:44.254194 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.122:9292/healthcheck\": EOF" Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.255076 4747 generic.go:334] "Generic (PLEG): container finished" podID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerID="38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd" exitCode=143 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.255287 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerDied","Data":"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd"} Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.255580 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-log" containerID="cri-o://1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832" gracePeriod=30 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.255679 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-httpd" containerID="cri-o://d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab" gracePeriod=30 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.255938 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-log" containerID="cri-o://5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a" gracePeriod=30 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.256005 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-httpd" containerID="cri-o://24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04" gracePeriod=30 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.256449 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-httpd" containerID="cri-o://7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4" gracePeriod=30 Nov 26 13:39:45 crc kubenswrapper[4747]: I1126 13:39:45.256484 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-log" containerID="cri-o://c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02" 
gracePeriod=30 Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.265128 4747 generic.go:334] "Generic (PLEG): container finished" podID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerID="c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02" exitCode=143 Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.265224 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerDied","Data":"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02"} Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.267742 4747 generic.go:334] "Generic (PLEG): container finished" podID="6ac0196c-4bba-45fc-8222-956473419c32" containerID="5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a" exitCode=143 Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.267809 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerDied","Data":"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a"} Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.270000 4747 generic.go:334] "Generic (PLEG): container finished" podID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerID="1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832" exitCode=143 Nov 26 13:39:46 crc kubenswrapper[4747]: I1126 13:39:46.270034 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerDied","Data":"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832"} Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.045730 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159284 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159374 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159402 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159395 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys" (OuterVolumeSpecName: "sys") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159424 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159462 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159492 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159510 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159523 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159553 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159569 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159598 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159613 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159660 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159675 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs\") pod \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\" (UID: \"c62715dc-7831-4903-bd3b-65e1d8cbcd58\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.159934 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160128 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160264 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs" (OuterVolumeSpecName: "logs") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160303 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev" (OuterVolumeSpecName: "dev") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160338 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run" (OuterVolumeSpecName: "run") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160354 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160369 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.160892 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.165285 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts" (OuterVolumeSpecName: "kube-api-access-2m6ts") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "kube-api-access-2m6ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.167412 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.168166 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.174200 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts" (OuterVolumeSpecName: "scripts") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.197278 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data" (OuterVolumeSpecName: "config-data") pod "c62715dc-7831-4903-bd3b-65e1d8cbcd58" (UID: "c62715dc-7831-4903-bd3b-65e1d8cbcd58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261813 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261852 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261861 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261875 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261883 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261892 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261902 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261913 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261921 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261928 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c62715dc-7831-4903-bd3b-65e1d8cbcd58-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261938 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c62715dc-7831-4903-bd3b-65e1d8cbcd58-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261946 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6ts\" (UniqueName: \"kubernetes.io/projected/c62715dc-7831-4903-bd3b-65e1d8cbcd58-kube-api-access-2m6ts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.261956 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c62715dc-7831-4903-bd3b-65e1d8cbcd58-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.282824 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" 
(UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.284022 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.295065 4747 generic.go:334] "Generic (PLEG): container finished" podID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerID="f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35" exitCode=0 Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.295119 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerDied","Data":"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35"} Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.295158 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.295181 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"c62715dc-7831-4903-bd3b-65e1d8cbcd58","Type":"ContainerDied","Data":"b6827cdce6d81db264381c3c098df66d4ac1edcc1a7a6ae4d039d311da13409c"} Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.295209 4747 scope.go:117] "RemoveContainer" containerID="f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.318742 4747 scope.go:117] "RemoveContainer" containerID="38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.337685 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.346609 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.363452 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.363488 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.373091 4747 scope.go:117] "RemoveContainer" containerID="f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35" Nov 26 13:39:48 crc kubenswrapper[4747]: E1126 13:39:48.373613 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35\": container with ID starting with f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35 not found: ID does not exist" containerID="f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.373666 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35"} err="failed to get container status 
\"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35\": rpc error: code = NotFound desc = could not find container \"f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35\": container with ID starting with f086479fec09969fa1e17f81a6b16d64dbf46c43f6784afd0e1f7b5f54201f35 not found: ID does not exist" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.373699 4747 scope.go:117] "RemoveContainer" containerID="38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd" Nov 26 13:39:48 crc kubenswrapper[4747]: E1126 13:39:48.374020 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd\": container with ID starting with 38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd not found: ID does not exist" containerID="38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.374043 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd"} err="failed to get container status \"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd\": rpc error: code = NotFound desc = could not find container \"38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd\": container with ID starting with 38692589cab6274fb7db31ea364dce1083294bb61d3904185f45566432e8defd not found: ID does not exist" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.728822 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771145 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771202 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771264 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771316 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771370 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52fcr\" (UniqueName: \"kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771410 
4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771433 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771462 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771492 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771513 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771538 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771582 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771647 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.771669 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6ac0196c-4bba-45fc-8222-956473419c32\" (UID: \"6ac0196c-4bba-45fc-8222-956473419c32\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.772723 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs" (OuterVolumeSpecName: "logs") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.772769 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.772792 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run" (OuterVolumeSpecName: "run") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.772813 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev" (OuterVolumeSpecName: "dev") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.772832 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.773137 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.773177 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys" (OuterVolumeSpecName: "sys") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.774083 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.774152 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.774804 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.779230 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance-cache") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.779714 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts" (OuterVolumeSpecName: "scripts") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.779748 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr" (OuterVolumeSpecName: "kube-api-access-52fcr") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "kube-api-access-52fcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.788368 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.816828 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data" (OuterVolumeSpecName: "config-data") pod "6ac0196c-4bba-45fc-8222-956473419c32" (UID: "6ac0196c-4bba-45fc-8222-956473419c32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872791 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872875 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872907 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872937 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z22c\" (UniqueName: \"kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872965 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.872990 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873031 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873074 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873114 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873327 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run" (OuterVolumeSpecName: "run") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873358 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873419 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873436 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873460 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873482 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi\") pod \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\" (UID: \"9cb1df98-acc9-4cdd-af43-fe79dd497ede\") " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873714 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873962 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs" (OuterVolumeSpecName: "logs") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.873997 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874357 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874546 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874561 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1df98-acc9-4cdd-af43-fe79dd497ede-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874570 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52fcr\" (UniqueName: \"kubernetes.io/projected/6ac0196c-4bba-45fc-8222-956473419c32-kube-api-access-52fcr\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874581 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874589 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874596 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874605 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874614 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac0196c-4bba-45fc-8222-956473419c32-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874628 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874636 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874643 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874662 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874670 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6ac0196c-4bba-45fc-8222-956473419c32-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874682 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874690 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874699 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac0196c-4bba-45fc-8222-956473419c32-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874707 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.874759 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.875459 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys" (OuterVolumeSpecName: "sys") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.875560 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev" (OuterVolumeSpecName: "dev") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.875595 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.875845 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.878104 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance-cache") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.882008 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts" (OuterVolumeSpecName: "scripts") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.885458 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c" (OuterVolumeSpecName: "kube-api-access-5z22c") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "kube-api-access-5z22c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.890210 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.891624 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.892149 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.916996 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data" (OuterVolumeSpecName: "config-data") pod "9cb1df98-acc9-4cdd-af43-fe79dd497ede" (UID: "9cb1df98-acc9-4cdd-af43-fe79dd497ede"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.952826 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976731 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976773 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976807 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976818 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976831 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976853 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976863 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9cb1df98-acc9-4cdd-af43-fe79dd497ede-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976873 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976883 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976893 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb1df98-acc9-4cdd-af43-fe79dd497ede-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976904 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z22c\" (UniqueName: \"kubernetes.io/projected/9cb1df98-acc9-4cdd-af43-fe79dd497ede-kube-api-access-5z22c\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.976920 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.993440 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Nov 26 13:39:48 crc kubenswrapper[4747]: I1126 13:39:48.996814 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077639 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gchb7\" (UniqueName: \"kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077733 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077766 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077809 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077842 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077880 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077899 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077920 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077940 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077960 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.077979 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078000 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078018 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts\") pod \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\" (UID: \"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8\") " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078082 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev" (OuterVolumeSpecName: "dev") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078476 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys" (OuterVolumeSpecName: "sys") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078515 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078530 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078541 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078549 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078573 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078593 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078611 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078630 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run" (OuterVolumeSpecName: "run") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.078723 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs" (OuterVolumeSpecName: "logs") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.081695 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.081751 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance-cache") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.082836 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts" (OuterVolumeSpecName: "scripts") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.083241 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7" (OuterVolumeSpecName: "kube-api-access-gchb7") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "kube-api-access-gchb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.130898 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data" (OuterVolumeSpecName: "config-data") pod "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" (UID: "b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179539 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179581 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179604 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179620 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179638 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179652 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179664 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-sys\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179677 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179688 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179701 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179713 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.179728 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gchb7\" (UniqueName: \"kubernetes.io/projected/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8-kube-api-access-gchb7\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.192621 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.198475 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.281562 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.281606 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.305117 4747 generic.go:334] "Generic (PLEG): container finished" podID="6ac0196c-4bba-45fc-8222-956473419c32" containerID="24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04" exitCode=0 Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.305214 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.305222 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerDied","Data":"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.305335 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6ac0196c-4bba-45fc-8222-956473419c32","Type":"ContainerDied","Data":"eb94d3c35f998b20de33e1b59234d333a2aabe9d1bb90bb82ce562f177c6e588"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.305412 4747 scope.go:117] "RemoveContainer" containerID="24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.307366 4747 generic.go:334] "Generic (PLEG): container finished" podID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerID="d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab" exitCode=0 Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.307448 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerDied","Data":"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.307491 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9cb1df98-acc9-4cdd-af43-fe79dd497ede","Type":"ContainerDied","Data":"d47477d9bc2fa7d83b9227f8e64cad19476038576d619bc67b538ccd1dd77c19"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.307660 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.313000 4747 generic.go:334] "Generic (PLEG): container finished" podID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerID="7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4" exitCode=0 Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.313072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerDied","Data":"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.313107 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8","Type":"ContainerDied","Data":"61316e35b43e859234328d675b1afffe00e013e89200113d8e2ce3be59e92f46"} Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.313174 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.340857 4747 scope.go:117] "RemoveContainer" containerID="5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.348716 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.361726 4747 scope.go:117] "RemoveContainer" containerID="24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.362281 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.362365 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04\": container with ID starting with 24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04 not found: ID does not exist" containerID="24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.362403 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04"} err="failed to get container status \"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04\": rpc error: code = NotFound desc = could not find container \"24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04\": container with ID starting with 24eb203c3c2031b508f64cbe4a1b49848eacec8aa315c31ba7187da00e7e1e04 not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.362431 4747 scope.go:117] "RemoveContainer" containerID="5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a" Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.363286 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a\": container with ID starting with 5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a not found: ID does not exist" containerID="5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.363336 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a"} err="failed to get container status \"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a\": rpc error: code = NotFound desc = could not find container \"5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a\": container with ID starting with 5953573cdc9e0ab6d57b400566f9bb79e2da92bc01c729d15b0d5e34846b651a not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.363375 4747 scope.go:117] "RemoveContainer" containerID="d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.377392 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.377457 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.388537 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.390028 4747 scope.go:117] "RemoveContainer" containerID="1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.396825 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.405415 4747 scope.go:117] "RemoveContainer" containerID="d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab" Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.405937 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab\": container with ID starting with d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab not found: ID does not exist" containerID="d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.405967 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab"} err="failed to get container status \"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab\": rpc error: code = NotFound desc = could not find container \"d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab\": container with ID starting with d2bdaf41ca39e793eb83c060595e14e4ab68f817216fef42f14bc859ce0d7bab not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.405988 4747 scope.go:117] "RemoveContainer" containerID="1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832" Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.406301 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832\": container with ID starting with 1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832 not found: ID does not exist" containerID="1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.406329 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832"} err="failed to get container status \"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832\": rpc error: code = NotFound desc = could not find container \"1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832\": container with ID starting with 1c71019a2dfde2445ad16a52bb23a0e408f38d01972f4e5ffbbb367937d87832 not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.406343 4747 scope.go:117] "RemoveContainer" containerID="7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.424472 4747 scope.go:117] "RemoveContainer" containerID="c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.441815 4747 scope.go:117] "RemoveContainer" 
containerID="7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4" Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.442277 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4\": container with ID starting with 7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4 not found: ID does not exist" containerID="7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.442306 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4"} err="failed to get container status \"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4\": rpc error: code = NotFound desc = could not find container \"7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4\": container with ID starting with 7b22d9dbb80cfcf11821c961a3633bf97e5157f90e78211601009f0792b4d6e4 not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.442326 4747 scope.go:117] "RemoveContainer" containerID="c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02" Nov 26 13:39:49 crc kubenswrapper[4747]: E1126 13:39:49.442582 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02\": container with ID starting with c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02 not found: ID does not exist" containerID="c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.442615 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02"} err="failed to get container status \"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02\": rpc error: code = NotFound desc = could not find container \"c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02\": container with ID starting with c358d633da19c967866bf0745f480b902e89afdf848181874694d2831633cb02 not found: ID does not exist" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.807637 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac0196c-4bba-45fc-8222-956473419c32" path="/var/lib/kubelet/pods/6ac0196c-4bba-45fc-8222-956473419c32/volumes" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.808641 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" path="/var/lib/kubelet/pods/9cb1df98-acc9-4cdd-af43-fe79dd497ede/volumes" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.809667 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" path="/var/lib/kubelet/pods/b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8/volumes" Nov 26 13:39:49 crc kubenswrapper[4747]: I1126 13:39:49.811132 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" path="/var/lib/kubelet/pods/c62715dc-7831-4903-bd3b-65e1d8cbcd58/volumes" Nov 26 13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.487870 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 
13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.488123 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-log" containerID="cri-o://8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c" gracePeriod=30 Nov 26 13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.488186 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-httpd" containerID="cri-o://311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20" gracePeriod=30 Nov 26 13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.735324 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.735604 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-log" containerID="cri-o://ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f" gracePeriod=30 Nov 26 13:39:50 crc kubenswrapper[4747]: I1126 13:39:50.735717 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-httpd" containerID="cri-o://898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026" gracePeriod=30 Nov 26 13:39:51 crc kubenswrapper[4747]: I1126 13:39:51.343736 4747 generic.go:334] "Generic (PLEG): container finished" podID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerID="8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c" exitCode=143 Nov 26 13:39:51 crc kubenswrapper[4747]: I1126 13:39:51.343919 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerDied","Data":"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"} Nov 26 13:39:51 crc kubenswrapper[4747]: I1126 13:39:51.346398 4747 generic.go:334] "Generic (PLEG): container finished" podID="17a955ae-240d-4709-94a7-474f45be1d46" containerID="ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f" exitCode=143 Nov 26 13:39:51 crc kubenswrapper[4747]: I1126 13:39:51.346428 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerDied","Data":"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"} Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.016421 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051580 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051626 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051669 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051703 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051708 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051737 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051758 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051793 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051808 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051831 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfk7\" (UniqueName: \"kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051859 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051891 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051910 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051926 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051960 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\" (UID: \"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.052269 4747 reconciler_common.go:293] 
"Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.051804 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys" (OuterVolumeSpecName: "sys") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.054971 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run" (OuterVolumeSpecName: "run") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.054999 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.055023 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.055039 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.055211 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev" (OuterVolumeSpecName: "dev") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.055593 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs" (OuterVolumeSpecName: "logs") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.055749 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.060739 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7" (OuterVolumeSpecName: "kube-api-access-scfk7") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "kube-api-access-scfk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.061210 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.061590 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts" (OuterVolumeSpecName: "scripts") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.075112 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.095890 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data" (OuterVolumeSpecName: "config-data") pod "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" (UID: "4e5046b1-a2c2-47f9-bb6f-e5c2a861d905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154019 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154088 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154104 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154113 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154123 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154130 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154139 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfk7\" (UniqueName: \"kubernetes.io/projected/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-kube-api-access-scfk7\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154147 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154156 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154164 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154171 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154183 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.154191 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.167835 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.167976 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.169239 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255216 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255589 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255613 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255631 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255320 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run" (OuterVolumeSpecName: "run") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255661 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255690 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255715 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255757 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255816 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcbfk\" (UniqueName: \"kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255843 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255867 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255927 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255968 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256009 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256028 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev\") pod \"17a955ae-240d-4709-94a7-474f45be1d46\" (UID: \"17a955ae-240d-4709-94a7-474f45be1d46\") " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255720 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys" (OuterVolumeSpecName: "sys") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.255993 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256094 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256113 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256133 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256394 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256409 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256418 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256427 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256435 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256443 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256452 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 
13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256460 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256468 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256493 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev" (OuterVolumeSpecName: "dev") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.256653 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs" (OuterVolumeSpecName: "logs") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.258615 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts" (OuterVolumeSpecName: "scripts") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.258943 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.259074 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk" (OuterVolumeSpecName: "kube-api-access-xcbfk") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "kube-api-access-xcbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.259855 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.289555 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data" (OuterVolumeSpecName: "config-data") pod "17a955ae-240d-4709-94a7-474f45be1d46" (UID: "17a955ae-240d-4709-94a7-474f45be1d46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357541 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357573 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a955ae-240d-4709-94a7-474f45be1d46-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357584 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcbfk\" (UniqueName: \"kubernetes.io/projected/17a955ae-240d-4709-94a7-474f45be1d46-kube-api-access-xcbfk\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357621 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357636 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17a955ae-240d-4709-94a7-474f45be1d46-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357653 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.357664 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17a955ae-240d-4709-94a7-474f45be1d46-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.380228 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.382996 4747 generic.go:334] "Generic (PLEG): container finished" podID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerID="311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20" exitCode=0 Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.383041 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerDied","Data":"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20"} Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.383122 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"4e5046b1-a2c2-47f9-bb6f-e5c2a861d905","Type":"ContainerDied","Data":"3685d22bbc37e203433cbdce05090507786bbfdf8f3cb59faf534bf13dd86e45"} Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.383145 4747 scope.go:117] "RemoveContainer" containerID="311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20" Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.383318 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.383586 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.385125 4747 generic.go:334] "Generic (PLEG): container finished" podID="17a955ae-240d-4709-94a7-474f45be1d46" containerID="898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026" exitCode=0
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.385163 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerDied","Data":"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"}
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.385193 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"17a955ae-240d-4709-94a7-474f45be1d46","Type":"ContainerDied","Data":"2b875406b551519d194a1def75364768d5d847d10705bfc19ab812bb5ea86eb5"}
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.385209 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.405514 4747 scope.go:117] "RemoveContainer" containerID="8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.418650 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.424820 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.433863 4747 scope.go:117] "RemoveContainer" containerID="311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20"
Nov 26 13:39:54 crc kubenswrapper[4747]: E1126 13:39:54.435659 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20\": container with ID starting with 311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20 not found: ID does not exist" containerID="311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.435692 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20"} err="failed to get container status \"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20\": rpc error: code = NotFound desc = could not find container \"311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20\": container with ID starting with 311546b8d607cbb3c92b05b63dc216733d6242890b59a62eec3966ee3305ef20 not found: ID does not exist"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.435712 4747 scope.go:117] "RemoveContainer" containerID="8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"
Nov 26 13:39:54 crc kubenswrapper[4747]: E1126 13:39:54.436261 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c\": container with ID starting with 8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c not found: ID does not exist" containerID="8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.436312 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c"} err="failed to get container status \"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c\": rpc error: code = NotFound desc = could not find container \"8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c\": container with ID starting with 8e406c4e4894cfbe29e9cb67698ee097c3acc06112d26de05462a7e1bf253b0c not found: ID does not exist"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.436345 4747 scope.go:117] "RemoveContainer" containerID="898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.437485 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.443603 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.457906 4747 scope.go:117] "RemoveContainer" containerID="ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.458897 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.458929 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.474466 4747 scope.go:117] "RemoveContainer" containerID="898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"
Nov 26 13:39:54 crc kubenswrapper[4747]: E1126 13:39:54.474849 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026\": container with ID starting with 898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026 not found: ID does not exist" containerID="898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.475016 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026"} err="failed to get container status \"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026\": rpc error: code = NotFound desc = could not find container \"898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026\": container with ID starting with 898bc1fff1b9f94a1c1cee0b5b4c6d3946d9eb2d14567d0f5bdfd502d94a3026 not found: ID does not exist"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.475195 4747 scope.go:117] "RemoveContainer" containerID="ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"
Nov 26 13:39:54 crc kubenswrapper[4747]: E1126 13:39:54.476276 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f\": container with ID starting with ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f not found: ID does not exist" containerID="ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"
Nov 26 13:39:54 crc kubenswrapper[4747]: I1126 13:39:54.476305 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f"} err="failed to get container status \"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f\": rpc error: code = NotFound desc = could not find container \"ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f\": container with ID starting with ce68764674e7afbb7f3daf47d0a76fe8eb611c88ccd5e08710ca1b838d2cd72f not found: ID does not exist"
removing container" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095140 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095149 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095157 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095176 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095184 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095193 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095199 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095214 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095220 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095230 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095236 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: E1126 13:39:55.095245 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095252 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095393 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095406 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095418 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095428 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb1df98-acc9-4cdd-af43-fe79dd497ede" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095439 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095448 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095457 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0196c-4bba-45fc-8222-956473419c32" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095467 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095477 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095486 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a955ae-240d-4709-94a7-474f45be1d46" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095499 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b45e02-39f8-40ef-9d98-cf4ca2e7a7b8" containerName="glance-httpd" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.095512 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62715dc-7831-4903-bd3b-65e1d8cbcd58" containerName="glance-log" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.096080 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.103167 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance8f99-account-delete-wttmq"] Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.167108 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.167349 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.267776 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.267824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" Nov 26 13:39:55 crc 
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.096080 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.103167 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance8f99-account-delete-wttmq"]
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.167108 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.167349 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.267776 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.267824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.268598 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.301752 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"glance8f99-account-delete-wttmq\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") " pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.418384 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.808586 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e61bfb-6cd0-4c8a-abbf-b836a6abb976" path="/var/lib/kubelet/pods/02e61bfb-6cd0-4c8a-abbf-b836a6abb976/volumes"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.810227 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a955ae-240d-4709-94a7-474f45be1d46" path="/var/lib/kubelet/pods/17a955ae-240d-4709-94a7-474f45be1d46/volumes"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.811565 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5046b1-a2c2-47f9-bb6f-e5c2a861d905" path="/var/lib/kubelet/pods/4e5046b1-a2c2-47f9-bb6f-e5c2a861d905/volumes"
Nov 26 13:39:55 crc kubenswrapper[4747]: I1126 13:39:55.840430 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance8f99-account-delete-wttmq"]
Nov 26 13:39:56 crc kubenswrapper[4747]: I1126 13:39:56.413647 4747 generic.go:334] "Generic (PLEG): container finished" podID="e2bc24db-53de-493c-bd1d-c08afd7445f7" containerID="9efb264505d248ab1f95e981c3cdd12853ff1c17741c062bee78261d473864f7" exitCode=0
Nov 26 13:39:56 crc kubenswrapper[4747]: I1126 13:39:56.413703 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" event={"ID":"e2bc24db-53de-493c-bd1d-c08afd7445f7","Type":"ContainerDied","Data":"9efb264505d248ab1f95e981c3cdd12853ff1c17741c062bee78261d473864f7"}
Nov 26 13:39:56 crc kubenswrapper[4747]: I1126 13:39:56.413733 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" event={"ID":"e2bc24db-53de-493c-bd1d-c08afd7445f7","Type":"ContainerStarted","Data":"bb8e0bbacd184293c8d7edc4f1d5c20c43fbf463c105567aca2a682e78635020"}
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.665755 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.704784 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") pod \"e2bc24db-53de-493c-bd1d-c08afd7445f7\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") "
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.704884 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") pod \"e2bc24db-53de-493c-bd1d-c08afd7445f7\" (UID: \"e2bc24db-53de-493c-bd1d-c08afd7445f7\") "
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.705771 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2bc24db-53de-493c-bd1d-c08afd7445f7" (UID: "e2bc24db-53de-493c-bd1d-c08afd7445f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.712280 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv" (OuterVolumeSpecName: "kube-api-access-926hv") pod "e2bc24db-53de-493c-bd1d-c08afd7445f7" (UID: "e2bc24db-53de-493c-bd1d-c08afd7445f7"). InnerVolumeSpecName "kube-api-access-926hv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.806957 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bc24db-53de-493c-bd1d-c08afd7445f7-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:39:57 crc kubenswrapper[4747]: I1126 13:39:57.806992 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-926hv\" (UniqueName: \"kubernetes.io/projected/e2bc24db-53de-493c-bd1d-c08afd7445f7-kube-api-access-926hv\") on node \"crc\" DevicePath \"\""
Nov 26 13:39:58 crc kubenswrapper[4747]: I1126 13:39:58.428545 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq" event={"ID":"e2bc24db-53de-493c-bd1d-c08afd7445f7","Type":"ContainerDied","Data":"bb8e0bbacd184293c8d7edc4f1d5c20c43fbf463c105567aca2a682e78635020"}
Nov 26 13:39:58 crc kubenswrapper[4747]: I1126 13:39:58.428911 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb8e0bbacd184293c8d7edc4f1d5c20c43fbf463c105567aca2a682e78635020"
Nov 26 13:39:58 crc kubenswrapper[4747]: I1126 13:39:58.428586 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8f99-account-delete-wttmq"
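Note on the unmount sequence above: the volume manager reconciles a desired state (volumes that pods still need) against an actual state (volumes mounted on the node). Deleting glance8f99-account-delete-wttmq removes its volumes from the desired state, so the next reconcile pass logs "UnmountVolume started", runs the plugin TearDown, and finally records "Volume detached". A toy Go sketch of that desired/actual pass, patterned on the log but not the real reconciler:

```go
package main

import "fmt"

// reconcile unmounts anything actually mounted that is no longer desired,
// mirroring the UnmountVolume -> TearDown -> detached sequence above.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
			// ... the volume plugin's TearDown would run here ...
			delete(actual, vol)
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}

func main() {
	actual := map[string]bool{"operator-scripts": true, "kube-api-access-926hv": true}
	desired := map[string]bool{} // pod deleted: nothing is desired anymore
	reconcile(desired, actual)
}
```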
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.116504 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-f4w87"]
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.124520 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-f4w87"]
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.139006 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-8f99-account-create-update-7tj9v"]
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.146032 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance8f99-account-delete-wttmq"]
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.151992 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-8f99-account-create-update-7tj9v"]
Nov 26 13:40:00 crc kubenswrapper[4747]: I1126 13:40:00.157150 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance8f99-account-delete-wttmq"]
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.807148 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a88030-95bf-4b2b-814b-da3925664d66" path="/var/lib/kubelet/pods/01a88030-95bf-4b2b-814b-da3925664d66/volumes"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.808205 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4be269-e8c9-435b-8a1b-7023727afd2f" path="/var/lib/kubelet/pods/ca4be269-e8c9-435b-8a1b-7023727afd2f/volumes"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.808736 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bc24db-53de-493c-bd1d-c08afd7445f7" path="/var/lib/kubelet/pods/e2bc24db-53de-493c-bd1d-c08afd7445f7/volumes"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.946603 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-p6d2d"]
Nov 26 13:40:01 crc kubenswrapper[4747]: E1126 13:40:01.946947 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc24db-53de-493c-bd1d-c08afd7445f7" containerName="mariadb-account-delete"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.946970 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc24db-53de-493c-bd1d-c08afd7445f7" containerName="mariadb-account-delete"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.947130 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc24db-53de-493c-bd1d-c08afd7445f7" containerName="mariadb-account-delete"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.947588 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.957323 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"]
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.958470 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.962335 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.964625 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6bd\" (UniqueName: \"kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.964739 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.964786 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.964937 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pvp\" (UniqueName: \"kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.969088 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-p6d2d"]
Nov 26 13:40:01 crc kubenswrapper[4747]: I1126 13:40:01.980254 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"]
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.004240 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"]
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.005323 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient"
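Note on the two recurring sandbox messages: "No sandbox for pod can be found" (util.go:30) appears for a brand-new pod with no sandbox at all, while "No ready sandbox for pod can be found" (util.go:48) appears when a sandbox record exists but is no longer in the ready state, typically during teardown. Both end with the kubelet deciding to start a fresh sandbox. This mapping is inferred from the message wording and where the messages occur in this log, not confirmed against kubelet source; a sketch of the decision under that assumption:

```go
package main

import "fmt"

type sandboxState int

const (
	ready sandboxState = iota
	notReady
)

// decide mirrors the two log messages: no sandbox at all versus a sandbox
// that exists but is not ready.
func decide(sandboxes []sandboxState) string {
	if len(sandboxes) == 0 {
		return "No sandbox for pod can be found. Need to start a new one"
	}
	if sandboxes[0] != ready {
		return "No ready sandbox for pod can be found. Need to start a new one"
	}
	return "sandbox is ready; reuse it"
}

func main() {
	fmt.Println(decide(nil))                      // fresh pod (util.go:30)
	fmt.Println(decide([]sandboxState{notReady})) // torn-down pod (util.go:48)
}
```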
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.007618 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.007921 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-zkhw9"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.008086 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.008371 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.014210 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"]
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.068930 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6bd\" (UniqueName: \"kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.069006 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.069033 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.069934 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.070039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pvp\" (UniqueName: \"kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.071463 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.099139 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6bd\" (UniqueName: \"kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd\") pod \"glance-db-create-p6d2d\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") " pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.107755 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pvp\" (UniqueName: \"kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp\") pod \"glance-2a2e-account-create-update-w4fvs\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") " pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.174987 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.175044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.175110 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdphd\" (UniqueName: \"kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.175137 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.276575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.276663 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.276710 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdphd\" (UniqueName: \"kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.276737 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.277578 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.277685 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.286695 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.290541 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.296230 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdphd\" (UniqueName: \"kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd\") pod \"openstackclient\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.304964 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.327994 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient"
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.827143 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"]
Nov 26 13:40:02 crc kubenswrapper[4747]: W1126 13:40:02.830377 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48756a07_259f_47bf_9088_0364f426fb71.slice/crio-0937c56297824e8e44145aebeb1544a5e917f880e1866f633721b930a8b63908 WatchSource:0}: Error finding container 0937c56297824e8e44145aebeb1544a5e917f880e1866f633721b930a8b63908: Status 404 returned error can't find the container with id 0937c56297824e8e44145aebeb1544a5e917f880e1866f633721b930a8b63908
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.836982 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"]
Nov 26 13:40:02 crc kubenswrapper[4747]: I1126 13:40:02.845897 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-p6d2d"]
Nov 26 13:40:02 crc kubenswrapper[4747]: W1126 13:40:02.857545 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896a016c_4ec2_4a99_9927_1e3464105999.slice/crio-e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4 WatchSource:0}: Error finding container e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4: Status 404 returned error can't find the container with id e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4
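Note on the two "Failed to process watch event ... 404" warnings above: the cgroup watcher sees the new pod's crio-* cgroup appear before CRI-O has finished registering the container, so the lookup by ID returns 404. Both container IDs show up as ContainerStarted events moments later, so the warnings are transient. The real handler just logs and relies on later watch events; purely as an illustration, a tolerate-and-retry loop for such a transient not-found looks like this (the sentinel and helper names are assumptions):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoSuchContainer = errors.New("can't find the container with id")

// lookupWithRetry retries while the runtime catches up with a cgroup that
// already exists, giving up after a bounded number of attempts.
func lookupWithRetry(lookup func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = lookup(); err == nil || !errors.Is(err, errNoSuchContainer) {
			return err
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("still missing after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := lookupWithRetry(func() error {
		calls++
		if calls < 3 {
			return errNoSuchContainer // runtime hasn't registered it yet
		}
		return nil
	}, 5, 10*time.Millisecond)
	fmt.Println("err:", err, "after", calls, "calls")
}
```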
event={"ID":"896a016c-4ec2-4a99-9927-1e3464105999","Type":"ContainerStarted","Data":"503a5fb78966ae653a1187f7a06fe171c8bb92e92bef16bf88405bd387d3c8e3"} Nov 26 13:40:03 crc kubenswrapper[4747]: I1126 13:40:03.502072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-p6d2d" event={"ID":"896a016c-4ec2-4a99-9927-1e3464105999","Type":"ContainerStarted","Data":"e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4"} Nov 26 13:40:03 crc kubenswrapper[4747]: I1126 13:40:03.519632 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs" podStartSLOduration=2.519604595 podStartE2EDuration="2.519604595s" podCreationTimestamp="2025-11-26 13:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:03.511978314 +0000 UTC m=+1490.498289339" watchObservedRunningTime="2025-11-26 13:40:03.519604595 +0000 UTC m=+1490.505915650" Nov 26 13:40:03 crc kubenswrapper[4747]: I1126 13:40:03.536252 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-p6d2d" podStartSLOduration=2.53623406 podStartE2EDuration="2.53623406s" podCreationTimestamp="2025-11-26 13:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:03.526246041 +0000 UTC m=+1490.512557056" watchObservedRunningTime="2025-11-26 13:40:03.53623406 +0000 UTC m=+1490.522545105" Nov 26 13:40:04 crc kubenswrapper[4747]: I1126 13:40:04.511137 4747 generic.go:334] "Generic (PLEG): container finished" podID="896a016c-4ec2-4a99-9927-1e3464105999" containerID="503a5fb78966ae653a1187f7a06fe171c8bb92e92bef16bf88405bd387d3c8e3" exitCode=0 Nov 26 13:40:04 crc kubenswrapper[4747]: I1126 13:40:04.511177 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-p6d2d" event={"ID":"896a016c-4ec2-4a99-9927-1e3464105999","Type":"ContainerDied","Data":"503a5fb78966ae653a1187f7a06fe171c8bb92e92bef16bf88405bd387d3c8e3"} Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.522814 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs" event={"ID":"8e58e20b-6846-447e-95e9-2d81f8688d6f","Type":"ContainerDied","Data":"551ce22e18f0dd11863a0a41ff624039db36caa2bea8bd0aec4e28124c556542"} Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.522751 4747 generic.go:334] "Generic (PLEG): container finished" podID="8e58e20b-6846-447e-95e9-2d81f8688d6f" containerID="551ce22e18f0dd11863a0a41ff624039db36caa2bea8bd0aec4e28124c556542" exitCode=0 Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.822157 4747 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.873429 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts\") pod \"896a016c-4ec2-4a99-9927-1e3464105999\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") "
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.873517 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb6bd\" (UniqueName: \"kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd\") pod \"896a016c-4ec2-4a99-9927-1e3464105999\" (UID: \"896a016c-4ec2-4a99-9927-1e3464105999\") "
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.874314 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "896a016c-4ec2-4a99-9927-1e3464105999" (UID: "896a016c-4ec2-4a99-9927-1e3464105999"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.878736 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd" (OuterVolumeSpecName: "kube-api-access-cb6bd") pod "896a016c-4ec2-4a99-9927-1e3464105999" (UID: "896a016c-4ec2-4a99-9927-1e3464105999"). InnerVolumeSpecName "kube-api-access-cb6bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.975555 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896a016c-4ec2-4a99-9927-1e3464105999-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:05 crc kubenswrapper[4747]: I1126 13:40:05.975592 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb6bd\" (UniqueName: \"kubernetes.io/projected/896a016c-4ec2-4a99-9927-1e3464105999-kube-api-access-cb6bd\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:06 crc kubenswrapper[4747]: I1126 13:40:06.537340 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-p6d2d"
Nov 26 13:40:06 crc kubenswrapper[4747]: I1126 13:40:06.538278 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-p6d2d" event={"ID":"896a016c-4ec2-4a99-9927-1e3464105999","Type":"ContainerDied","Data":"e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4"}
Nov 26 13:40:06 crc kubenswrapper[4747]: I1126 13:40:06.538355 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b4b4d808f733ce1346bf906bc841b0140331925f682fd32c29eccf06846bb4"
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.007663 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.061523 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts\") pod \"8e58e20b-6846-447e-95e9-2d81f8688d6f\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") "
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.061599 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pvp\" (UniqueName: \"kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp\") pod \"8e58e20b-6846-447e-95e9-2d81f8688d6f\" (UID: \"8e58e20b-6846-447e-95e9-2d81f8688d6f\") "
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.063649 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e58e20b-6846-447e-95e9-2d81f8688d6f" (UID: "8e58e20b-6846-447e-95e9-2d81f8688d6f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.067011 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp" (OuterVolumeSpecName: "kube-api-access-n8pvp") pod "8e58e20b-6846-447e-95e9-2d81f8688d6f" (UID: "8e58e20b-6846-447e-95e9-2d81f8688d6f"). InnerVolumeSpecName "kube-api-access-n8pvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.163650 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e58e20b-6846-447e-95e9-2d81f8688d6f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.163705 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pvp\" (UniqueName: \"kubernetes.io/projected/8e58e20b-6846-447e-95e9-2d81f8688d6f-kube-api-access-n8pvp\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.598522 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs" event={"ID":"8e58e20b-6846-447e-95e9-2d81f8688d6f","Type":"ContainerDied","Data":"af3f8e7d4c141b8914115438182e940bb814a59557a5e33625e76a12f24e9dd2"}
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.598799 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3f8e7d4c141b8914115438182e940bb814a59557a5e33625e76a12f24e9dd2"
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.598536 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.599899 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"48756a07-259f-47bf-9088-0364f426fb71","Type":"ContainerStarted","Data":"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee"}
Nov 26 13:40:11 crc kubenswrapper[4747]: I1126 13:40:11.613139 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=2.426444857 podStartE2EDuration="10.613117688s" podCreationTimestamp="2025-11-26 13:40:01 +0000 UTC" firstStartedPulling="2025-11-26 13:40:02.849710163 +0000 UTC m=+1489.836021178" lastFinishedPulling="2025-11-26 13:40:11.036382984 +0000 UTC m=+1498.022694009" observedRunningTime="2025-11-26 13:40:11.612305968 +0000 UTC m=+1498.598616993" watchObservedRunningTime="2025-11-26 13:40:11.613117688 +0000 UTC m=+1498.599428733"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.217564 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-276p7"]
Nov 26 13:40:12 crc kubenswrapper[4747]: E1126 13:40:12.217906 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896a016c-4ec2-4a99-9927-1e3464105999" containerName="mariadb-database-create"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.217917 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="896a016c-4ec2-4a99-9927-1e3464105999" containerName="mariadb-database-create"
Nov 26 13:40:12 crc kubenswrapper[4747]: E1126 13:40:12.217930 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e58e20b-6846-447e-95e9-2d81f8688d6f" containerName="mariadb-account-create-update"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.217936 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e58e20b-6846-447e-95e9-2d81f8688d6f" containerName="mariadb-account-create-update"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.218118 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e58e20b-6846-447e-95e9-2d81f8688d6f" containerName="mariadb-account-create-update"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.218136 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="896a016c-4ec2-4a99-9927-1e3464105999" containerName="mariadb-database-create"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.218586 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.221102 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.221472 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hddh2"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.256262 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-276p7"]
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.282725 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.282831 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrvh\" (UniqueName: \"kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.282921 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.383995 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.384111 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.384142 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrvh\" (UniqueName: \"kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.388730 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.395525 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.400169 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrvh\" (UniqueName: \"kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh\") pod \"glance-db-sync-276p7\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") " pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.538023 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:12 crc kubenswrapper[4747]: I1126 13:40:12.970427 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-276p7"]
Nov 26 13:40:13 crc kubenswrapper[4747]: I1126 13:40:13.615742 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-276p7" event={"ID":"4138a010-9438-41a8-8113-3067d2e885b4","Type":"ContainerStarted","Data":"402b573eb7927542c1008622d36619454e23be2998de46a8e920147d260a2a4e"}
Nov 26 13:40:14 crc kubenswrapper[4747]: I1126 13:40:14.628170 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-276p7" event={"ID":"4138a010-9438-41a8-8113-3067d2e885b4","Type":"ContainerStarted","Data":"3a95b1f24bd01352a4b4050d2b436c62a4d82a2178db7b9ad9bb709afb7238f8"}
Nov 26 13:40:14 crc kubenswrapper[4747]: I1126 13:40:14.649706 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-276p7" podStartSLOduration=2.649688597 podStartE2EDuration="2.649688597s" podCreationTimestamp="2025-11-26 13:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:14.64498374 +0000 UTC m=+1501.631294755" watchObservedRunningTime="2025-11-26 13:40:14.649688597 +0000 UTC m=+1501.635999612"
Nov 26 13:40:17 crc kubenswrapper[4747]: I1126 13:40:17.653907 4747 generic.go:334] "Generic (PLEG): container finished" podID="4138a010-9438-41a8-8113-3067d2e885b4" containerID="3a95b1f24bd01352a4b4050d2b436c62a4d82a2178db7b9ad9bb709afb7238f8" exitCode=0
Nov 26 13:40:17 crc kubenswrapper[4747]: I1126 13:40:17.653989 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-276p7" event={"ID":"4138a010-9438-41a8-8113-3067d2e885b4","Type":"ContainerDied","Data":"3a95b1f24bd01352a4b4050d2b436c62a4d82a2178db7b9ad9bb709afb7238f8"}
Nov 26 13:40:18 crc kubenswrapper[4747]: I1126 13:40:18.999096 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.088382 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnrvh\" (UniqueName: \"kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh\") pod \"4138a010-9438-41a8-8113-3067d2e885b4\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") "
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.088440 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data\") pod \"4138a010-9438-41a8-8113-3067d2e885b4\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") "
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.088501 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data\") pod \"4138a010-9438-41a8-8113-3067d2e885b4\" (UID: \"4138a010-9438-41a8-8113-3067d2e885b4\") "
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.093778 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4138a010-9438-41a8-8113-3067d2e885b4" (UID: "4138a010-9438-41a8-8113-3067d2e885b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.094037 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh" (OuterVolumeSpecName: "kube-api-access-cnrvh") pod "4138a010-9438-41a8-8113-3067d2e885b4" (UID: "4138a010-9438-41a8-8113-3067d2e885b4"). InnerVolumeSpecName "kube-api-access-cnrvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.133251 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data" (OuterVolumeSpecName: "config-data") pod "4138a010-9438-41a8-8113-3067d2e885b4" (UID: "4138a010-9438-41a8-8113-3067d2e885b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.190616 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.190976 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnrvh\" (UniqueName: \"kubernetes.io/projected/4138a010-9438-41a8-8113-3067d2e885b4-kube-api-access-cnrvh\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.190989 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4138a010-9438-41a8-8113-3067d2e885b4-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.675157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-276p7" event={"ID":"4138a010-9438-41a8-8113-3067d2e885b4","Type":"ContainerDied","Data":"402b573eb7927542c1008622d36619454e23be2998de46a8e920147d260a2a4e"}
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.675201 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402b573eb7927542c1008622d36619454e23be2998de46a8e920147d260a2a4e"
Nov 26 13:40:19 crc kubenswrapper[4747]: I1126 13:40:19.675258 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-276p7"
Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.610639 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:40:20 crc kubenswrapper[4747]: E1126 13:40:20.610923 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4138a010-9438-41a8-8113-3067d2e885b4" containerName="glance-db-sync"
Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.610934 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4138a010-9438-41a8-8113-3067d2e885b4" containerName="glance-db-sync"
Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.611071 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4138a010-9438-41a8-8113-3067d2e885b4" containerName="glance-db-sync"
Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.611797 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.614446 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hddh2" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.614685 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.615190 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.624646 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.710730 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812106 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812354 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812478 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812571 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812649 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812723 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules\") pod \"glance-default-external-api-1\" (UID: 
\"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812791 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812878 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.812981 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9qf6\" (UniqueName: \"kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813114 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813272 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813358 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813411 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813436 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.813790 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod 
\"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.837626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915142 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915363 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915414 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915462 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915483 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915501 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915518 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915537 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915574 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915616 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9qf6\" (UniqueName: \"kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915634 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915644 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915654 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915702 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915754 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915774 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915895 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.915987 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916053 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916084 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916108 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916143 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916271 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.916312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.919179 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.921994 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.922492 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.933553 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9qf6\" (UniqueName: \"kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.942373 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:20 crc kubenswrapper[4747]: I1126 13:40:20.953799 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.005833 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.007427 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.009845 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.024995 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.042840 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.044406 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.067632 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119133 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119192 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119212 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119230 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119262 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119289 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119317 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119339 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119357 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119379 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119426 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119451 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc87\" (UniqueName: \"kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119474 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119488 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119508 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119648 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys\") pod \"glance-default-internal-api-0\" (UID: 
\"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119781 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119836 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119894 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119927 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119958 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.119991 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.120020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tqn\" (UniqueName: 
\"kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.120041 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.120131 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.221888 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222272 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222301 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222346 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222368 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc87\" (UniqueName: \"kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222398 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222420 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222439 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222462 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222481 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222507 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222526 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222545 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222571 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222591 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222613 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222633 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222653 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222672 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222692 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222724 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crmf\" (UniqueName: \"kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222751 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222771 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222793 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222834 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222852 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tqn\" (UniqueName: \"kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222872 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222892 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222931 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222952 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222969 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.222987 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223007 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223043 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223112 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223132 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223163 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223193 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223215 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.223238 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224069 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224150 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224180 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224538 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224594 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224623 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224655 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224683 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224710 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224707 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick\") pod 
\"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224750 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224878 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224924 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224888 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224979 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.224878 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.225152 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.225176 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.225255 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme\") pod 
\"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.225342 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.225490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.226237 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.227903 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.228268 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.229837 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.234180 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.234602 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.248771 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc87\" (UniqueName: \"kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.254509 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tqn\" 
(UniqueName: \"kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.256619 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.266331 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.266619 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.276693 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325732 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325830 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325856 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325890 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick\") pod 
\"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325908 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325961 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325982 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.325998 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326020 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326039 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326108 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326140 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crmf\" (UniqueName: \"kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326166 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326223 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326242 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326279 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326305 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326431 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326459 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.326522 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.328800 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.328900 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.328937 
4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.328965 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.329718 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.339720 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.350263 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crmf\" (UniqueName: \"kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.353348 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.354866 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.356108 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.358912 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.372510 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.489708 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.659531 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.690570 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerStarted","Data":"370ed4e12cb5bcd472670df6ebc72f0575bb1346e52a0b93c66c38634e1859d2"} Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.690618 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerStarted","Data":"e72ae1a8806f8440015700e723613b40c5bdb9401df8d4a6469179bb05d8effb"} Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.864172 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:21 crc kubenswrapper[4747]: W1126 13:40:21.873408 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d94e8a6_b950_4373_a5fb_c7bbf8a82650.slice/crio-1909c0cce96441b87bb8d2f665cc0c538fc6c6e7ad28c2396fdb8fa98e3bc869 WatchSource:0}: Error finding container 1909c0cce96441b87bb8d2f665cc0c538fc6c6e7ad28c2396fdb8fa98e3bc869: Status 404 returned error can't find the container with id 1909c0cce96441b87bb8d2f665cc0c538fc6c6e7ad28c2396fdb8fa98e3bc869 Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.880915 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:21 crc kubenswrapper[4747]: W1126 13:40:21.895115 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod330f40bb_ffce_4c10_b3d3_9adf6ddacef0.slice/crio-0a14e36fd82039edc093a63af68b83f53a560457c4322837e5d8c2592cda7fb0 WatchSource:0}: Error finding container 0a14e36fd82039edc093a63af68b83f53a560457c4322837e5d8c2592cda7fb0: Status 404 returned error can't find the container with id 0a14e36fd82039edc093a63af68b83f53a560457c4322837e5d8c2592cda7fb0 Nov 26 13:40:21 crc kubenswrapper[4747]: I1126 13:40:21.994878 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:21 crc kubenswrapper[4747]: W1126 13:40:21.997260 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fe9dbad_08c5_4b46_a86a_d1731c26cd12.slice/crio-f20820b953129272c8bc9da3ac208b42562aa00d322661d3af8954bffd534352 WatchSource:0}: Error finding container f20820b953129272c8bc9da3ac208b42562aa00d322661d3af8954bffd534352: Status 404 returned error can't find the container with id f20820b953129272c8bc9da3ac208b42562aa00d322661d3af8954bffd534352 Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.990891 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerStarted","Data":"d11d52dc442d63cd814775b22230f1570579d3c67f103b90d5fe88743d55a04b"} Nov 26 
13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.991418 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerStarted","Data":"76aafa9f8b81d763aee7a07de034cd9a3343518e0e3df1bbd78b35197e863382"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.991436 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerStarted","Data":"1909c0cce96441b87bb8d2f665cc0c538fc6c6e7ad28c2396fdb8fa98e3bc869"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.994218 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerStarted","Data":"07a08b60e70a0a0cca839ad71264cadb5116278cf7db6ff3caefd6626b55aee6"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.996457 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerStarted","Data":"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.996493 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerStarted","Data":"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.996502 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerStarted","Data":"f20820b953129272c8bc9da3ac208b42562aa00d322661d3af8954bffd534352"} Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.996596 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-log" containerID="cri-o://4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" gracePeriod=30 Nov 26 13:40:22 crc kubenswrapper[4747]: I1126 13:40:22.996937 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-httpd" containerID="cri-o://aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" gracePeriod=30 Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.002706 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerStarted","Data":"5e02387892639dbdc06f82ad008dde2a4c8f8cafb598bb60d2845823ed8fc3f4"} Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.002758 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerStarted","Data":"6a4c97fe84dab17321a2402eaa4e4feb55ce6110c66082679600679d8db931d1"} Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.002771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerStarted","Data":"0a14e36fd82039edc093a63af68b83f53a560457c4322837e5d8c2592cda7fb0"} Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.029106 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=4.029064805 podStartE2EDuration="4.029064805s" podCreationTimestamp="2025-11-26 13:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:23.022305216 +0000 UTC m=+1510.008616231" watchObservedRunningTime="2025-11-26 13:40:23.029064805 +0000 UTC m=+1510.015375830" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.064979 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=4.064957241 podStartE2EDuration="4.064957241s" podCreationTimestamp="2025-11-26 13:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:23.047040564 +0000 UTC m=+1510.033351579" watchObservedRunningTime="2025-11-26 13:40:23.064957241 +0000 UTC m=+1510.051268256" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.072176 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=4.072151731 podStartE2EDuration="4.072151731s" podCreationTimestamp="2025-11-26 13:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:23.064096159 +0000 UTC m=+1510.050407174" watchObservedRunningTime="2025-11-26 13:40:23.072151731 +0000 UTC m=+1510.058462746" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.098525 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.098504209 podStartE2EDuration="3.098504209s" podCreationTimestamp="2025-11-26 13:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:23.08772861 +0000 UTC m=+1510.074039625" watchObservedRunningTime="2025-11-26 13:40:23.098504209 +0000 UTC m=+1510.084815234" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.609641 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.661961 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.662106 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.662330 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.662352 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.662143 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663237 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663258 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663328 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys" (OuterVolumeSpecName: "sys") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663374 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663402 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663578 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs" (OuterVolumeSpecName: "logs") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663395 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663643 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crmf\" (UniqueName: \"kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663687 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev" (OuterVolumeSpecName: "dev") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663727 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.663968 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run" (OuterVolumeSpecName: "run") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664099 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664122 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664150 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules\") pod \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\" (UID: \"9fe9dbad-08c5-4b46-a86a-d1731c26cd12\") " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664238 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664859 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664877 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664886 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664895 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664905 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664912 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664919 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664928 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.664951 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.667805 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.667847 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf" (OuterVolumeSpecName: "kube-api-access-2crmf") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "kube-api-access-2crmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.668030 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts" (OuterVolumeSpecName: "scripts") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.668451 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.697833 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data" (OuterVolumeSpecName: "config-data") pod "9fe9dbad-08c5-4b46-a86a-d1731c26cd12" (UID: "9fe9dbad-08c5-4b46-a86a-d1731c26cd12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766484 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766531 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766571 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766587 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2crmf\" (UniqueName: \"kubernetes.io/projected/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-kube-api-access-2crmf\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766600 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe9dbad-08c5-4b46-a86a-d1731c26cd12-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.766618 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.781491 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.782717 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.869023 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node 
\"crc\" DevicePath \"\"" Nov 26 13:40:23 crc kubenswrapper[4747]: I1126 13:40:23.869061 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012623 4747 generic.go:334] "Generic (PLEG): container finished" podID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerID="aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" exitCode=143 Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012653 4747 generic.go:334] "Generic (PLEG): container finished" podID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerID="4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" exitCode=143 Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012696 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012794 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerDied","Data":"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc"} Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012885 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerDied","Data":"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5"} Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012902 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"9fe9dbad-08c5-4b46-a86a-d1731c26cd12","Type":"ContainerDied","Data":"f20820b953129272c8bc9da3ac208b42562aa00d322661d3af8954bffd534352"} Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.012924 4747 scope.go:117] "RemoveContainer" containerID="aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.048438 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.053977 4747 scope.go:117] "RemoveContainer" containerID="4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.066551 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.087618 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:24 crc kubenswrapper[4747]: E1126 13:40:24.088086 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-log" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.088114 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-log" Nov 26 13:40:24 crc kubenswrapper[4747]: E1126 13:40:24.088146 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-httpd" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.088158 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-httpd" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.088407 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-log" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.088441 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" containerName="glance-httpd" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.088924 4747 scope.go:117] "RemoveContainer" containerID="aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.089699 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: E1126 13:40:24.090345 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc\": container with ID starting with aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc not found: ID does not exist" containerID="aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.090381 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc"} err="failed to get container status \"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc\": rpc error: code = NotFound desc = could not find container \"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc\": container with ID starting with aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc not found: ID does not exist" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.090407 4747 scope.go:117] "RemoveContainer" containerID="4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" Nov 26 13:40:24 crc kubenswrapper[4747]: E1126 13:40:24.090798 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5\": container with ID starting with 4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5 not found: ID does not exist" containerID="4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.090814 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5"} err="failed to get container status \"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5\": rpc error: code = NotFound desc = could not find container \"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5\": container with ID starting with 4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5 not found: ID does not exist" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.090826 4747 scope.go:117] "RemoveContainer" containerID="aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.091990 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc"} 
err="failed to get container status \"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc\": rpc error: code = NotFound desc = could not find container \"aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc\": container with ID starting with aacd514f76ff96d156c5fbee05ea2b78ae4b84e3d2acfa0295f5982ba7b601dc not found: ID does not exist" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.092108 4747 scope.go:117] "RemoveContainer" containerID="4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.092643 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5"} err="failed to get container status \"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5\": rpc error: code = NotFound desc = could not find container \"4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5\": container with ID starting with 4bc12a208dcae8b1e196b923aa2f717bcad95773425423453ae6925fc3c42aa5 not found: ID does not exist" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.097877 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281296 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281387 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281497 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281539 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281571 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281648 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281816 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.281958 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.282033 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.282189 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.282275 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdqk\" (UniqueName: \"kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.282362 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.282585 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 
13:40:24.384587 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384655 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbdqk\" (UniqueName: \"kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384693 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384798 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384860 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384930 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384966 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384961 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384998 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.384997 4747 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385163 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385219 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385306 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385346 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385344 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385314 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385556 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385928 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.385931 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.386115 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.386173 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.386239 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.386485 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.386804 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.392166 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.392352 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.411589 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.412928 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.415634 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbdqk\" (UniqueName: \"kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk\") pod \"glance-default-internal-api-1\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:24 crc kubenswrapper[4747]: I1126 13:40:24.713766 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:25 crc kubenswrapper[4747]: I1126 13:40:25.153301 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:40:25 crc kubenswrapper[4747]: W1126 13:40:25.156386 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b0bc3e_7c63_442b_91bf_9cb417a13f16.slice/crio-cfdc01c09cb2961fe37acaafa9311ab3b7da82f62bf84dcf8a26ae0431e860c6 WatchSource:0}: Error finding container cfdc01c09cb2961fe37acaafa9311ab3b7da82f62bf84dcf8a26ae0431e860c6: Status 404 returned error can't find the container with id cfdc01c09cb2961fe37acaafa9311ab3b7da82f62bf84dcf8a26ae0431e860c6 Nov 26 13:40:25 crc kubenswrapper[4747]: I1126 13:40:25.808843 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe9dbad-08c5-4b46-a86a-d1731c26cd12" path="/var/lib/kubelet/pods/9fe9dbad-08c5-4b46-a86a-d1731c26cd12/volumes" Nov 26 13:40:26 crc kubenswrapper[4747]: I1126 13:40:26.030317 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerStarted","Data":"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4"} Nov 26 13:40:26 crc kubenswrapper[4747]: I1126 13:40:26.030367 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerStarted","Data":"d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1"} Nov 26 13:40:26 crc kubenswrapper[4747]: I1126 13:40:26.030378 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerStarted","Data":"cfdc01c09cb2961fe37acaafa9311ab3b7da82f62bf84dcf8a26ae0431e860c6"} Nov 26 13:40:26 crc kubenswrapper[4747]: I1126 13:40:26.060508 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=2.060486685 podStartE2EDuration="2.060486685s" podCreationTimestamp="2025-11-26 13:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:26.05224965 +0000 UTC m=+1513.038560685" 
watchObservedRunningTime="2025-11-26 13:40:26.060486685 +0000 UTC m=+1513.046797700" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.229607 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.230025 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.253431 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.293669 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.330630 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.331106 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.352041 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.355775 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.355821 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.379969 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.397151 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:31 crc kubenswrapper[4747]: I1126 13:40:31.410363 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.085988 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.087301 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.087321 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.087330 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.087341 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:32 crc kubenswrapper[4747]: I1126 13:40:32.087349 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:33 crc 
kubenswrapper[4747]: I1126 13:40:33.417315 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:40:33 crc kubenswrapper[4747]: I1126 13:40:33.417659 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.098970 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.099008 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.099021 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.099075 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.099576 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.099720 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.113479 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.126303 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.141082 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.141423 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.180890 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.247612 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.302628 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.714962 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.715008 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.752360 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:34 crc kubenswrapper[4747]: I1126 13:40:34.762165 4747 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:35 crc kubenswrapper[4747]: I1126 13:40:35.109693 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:35 crc kubenswrapper[4747]: I1126 13:40:35.110190 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:36 crc kubenswrapper[4747]: I1126 13:40:36.116170 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-log" containerID="cri-o://76aafa9f8b81d763aee7a07de034cd9a3343518e0e3df1bbd78b35197e863382" gracePeriod=30 Nov 26 13:40:36 crc kubenswrapper[4747]: I1126 13:40:36.116247 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-httpd" containerID="cri-o://d11d52dc442d63cd814775b22230f1570579d3c67f103b90d5fe88743d55a04b" gracePeriod=30 Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.031420 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.033126 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.091000 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.124401 4747 generic.go:334] "Generic (PLEG): container finished" podID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerID="76aafa9f8b81d763aee7a07de034cd9a3343518e0e3df1bbd78b35197e863382" exitCode=143 Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.124502 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerDied","Data":"76aafa9f8b81d763aee7a07de034cd9a3343518e0e3df1bbd78b35197e863382"} Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.124958 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-log" containerID="cri-o://6a4c97fe84dab17321a2402eaa4e4feb55ce6110c66082679600679d8db931d1" gracePeriod=30 Nov 26 13:40:37 crc kubenswrapper[4747]: I1126 13:40:37.125141 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-httpd" containerID="cri-o://5e02387892639dbdc06f82ad008dde2a4c8f8cafb598bb60d2845823ed8fc3f4" gracePeriod=30 Nov 26 13:40:38 crc kubenswrapper[4747]: I1126 13:40:38.137205 4747 generic.go:334] "Generic (PLEG): container finished" podID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerID="6a4c97fe84dab17321a2402eaa4e4feb55ce6110c66082679600679d8db931d1" exitCode=143 Nov 26 13:40:38 crc kubenswrapper[4747]: I1126 13:40:38.137290 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerDied","Data":"6a4c97fe84dab17321a2402eaa4e4feb55ce6110c66082679600679d8db931d1"} Nov 26 13:40:40 crc kubenswrapper[4747]: I1126 13:40:40.165077 4747 generic.go:334] "Generic (PLEG): container finished" podID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerID="d11d52dc442d63cd814775b22230f1570579d3c67f103b90d5fe88743d55a04b" exitCode=0 Nov 26 13:40:40 crc kubenswrapper[4747]: I1126 13:40:40.165125 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerDied","Data":"d11d52dc442d63cd814775b22230f1570579d3c67f103b90d5fe88743d55a04b"} Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.185733 4747 generic.go:334] "Generic (PLEG): container finished" podID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerID="5e02387892639dbdc06f82ad008dde2a4c8f8cafb598bb60d2845823ed8fc3f4" exitCode=0 Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.185789 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerDied","Data":"5e02387892639dbdc06f82ad008dde2a4c8f8cafb598bb60d2845823ed8fc3f4"} Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.532326 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.698388 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699133 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699201 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699255 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699279 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699323 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc 
kubenswrapper[4747]: I1126 13:40:42.699361 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699405 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699425 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699470 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699494 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7tqn\" (UniqueName: \"kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699522 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699560 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699581 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys\") pod \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\" (UID: \"2d94e8a6-b950-4373-a5fb-c7bbf8a82650\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699629 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699722 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev" (OuterVolumeSpecName: "dev") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699735 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699767 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699847 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699881 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run" (OuterVolumeSpecName: "run") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.699906 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700198 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs" (OuterVolumeSpecName: "logs") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys" (OuterVolumeSpecName: "sys") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700497 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700512 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700524 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700534 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700543 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700570 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700581 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700591 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.700605 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.704348 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.704459 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.704889 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts" (OuterVolumeSpecName: "scripts") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.704965 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn" (OuterVolumeSpecName: "kube-api-access-n7tqn") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "kube-api-access-n7tqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.754077 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data" (OuterVolumeSpecName: "config-data") pod "2d94e8a6-b950-4373-a5fb-c7bbf8a82650" (UID: "2d94e8a6-b950-4373-a5fb-c7bbf8a82650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.792318 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.801759 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.801813 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.801833 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.801904 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802211 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chc87\" (UniqueName: \"kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802231 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802252 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802247 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs" (OuterVolumeSpecName: "logs") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802316 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802337 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802369 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802388 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802421 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802437 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc 
kubenswrapper[4747]: I1126 13:40:42.802458 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802500 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts\") pod \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\" (UID: \"330f40bb-ffce-4c10-b3d3-9adf6ddacef0\") " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802629 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys" (OuterVolumeSpecName: "sys") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802792 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802803 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802822 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802831 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802843 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802854 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802861 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.802871 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7tqn\" (UniqueName: \"kubernetes.io/projected/2d94e8a6-b950-4373-a5fb-c7bbf8a82650-kube-api-access-n7tqn\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.803443 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.803583 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.804294 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.804425 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run" (OuterVolumeSpecName: "run") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.804445 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev" (OuterVolumeSpecName: "dev") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.804542 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance-cache") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.805135 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.806959 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87" (OuterVolumeSpecName: "kube-api-access-chc87") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "kube-api-access-chc87". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.808333 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts" (OuterVolumeSpecName: "scripts") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.809375 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.820434 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.839932 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data" (OuterVolumeSpecName: "config-data") pod "330f40bb-ffce-4c10-b3d3-9adf6ddacef0" (UID: "330f40bb-ffce-4c10-b3d3-9adf6ddacef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.843876 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.903627 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.903659 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.903669 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.903679 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chc87\" (UniqueName: \"kubernetes.io/projected/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-kube-api-access-chc87\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.903697 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904507 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904518 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904528 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904537 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904544 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904551 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904559 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.904567 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330f40bb-ffce-4c10-b3d3-9adf6ddacef0-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.916926 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 26 13:40:42 crc kubenswrapper[4747]: I1126 13:40:42.920112 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.005959 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.005995 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.199182 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.199176 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"330f40bb-ffce-4c10-b3d3-9adf6ddacef0","Type":"ContainerDied","Data":"0a14e36fd82039edc093a63af68b83f53a560457c4322837e5d8c2592cda7fb0"} Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.199378 4747 scope.go:117] "RemoveContainer" containerID="5e02387892639dbdc06f82ad008dde2a4c8f8cafb598bb60d2845823ed8fc3f4" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.203040 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"2d94e8a6-b950-4373-a5fb-c7bbf8a82650","Type":"ContainerDied","Data":"1909c0cce96441b87bb8d2f665cc0c538fc6c6e7ad28c2396fdb8fa98e3bc869"} Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.203204 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.233535 4747 scope.go:117] "RemoveContainer" containerID="6a4c97fe84dab17321a2402eaa4e4feb55ce6110c66082679600679d8db931d1" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.266816 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.272498 4747 scope.go:117] "RemoveContainer" containerID="d11d52dc442d63cd814775b22230f1570579d3c67f103b90d5fe88743d55a04b" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.282780 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.296169 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.299392 4747 scope.go:117] "RemoveContainer" containerID="76aafa9f8b81d763aee7a07de034cd9a3343518e0e3df1bbd78b35197e863382" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.309581 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.319823 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: E1126 13:40:43.320198 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320218 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: E1126 13:40:43.320246 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320255 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: E1126 13:40:43.320270 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320279 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: E1126 13:40:43.320297 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320306 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320464 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320479 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320492 4747 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-httpd" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.320517 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" containerName="glance-log" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.321601 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.335129 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.336884 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.349264 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.354955 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.512941 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhl5m\" (UniqueName: \"kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.512995 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513031 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513116 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513144 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513165 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: 
\"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513197 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513221 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513250 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513276 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513307 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513334 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513353 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513389 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpp7d\" (UniqueName: \"kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513413 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513444 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513471 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513496 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513572 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513616 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513664 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513706 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513749 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513792 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513875 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513894 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513942 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.513974 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615275 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615319 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615358 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615390 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615435 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615454 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhl5m\" (UniqueName: \"kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615474 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615488 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615513 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615528 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615541 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615559 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615574 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615568 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615594 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.615970 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616002 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616205 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616211 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616289 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616351 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616431 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.616747 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.617381 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.619496 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.619659 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.619922 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.619964 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620008 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620080 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620116 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620136 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620197 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpp7d\" (UniqueName: \"kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620233 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620280 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620308 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620335 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620373 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620407 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620689 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.620786 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.622520 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.624822 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.624901 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.624929 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.624952 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.624993 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc 
kubenswrapper[4747]: I1126 13:40:43.625263 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.634326 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.636706 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.645142 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.645874 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhl5m\" (UniqueName: \"kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.666987 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.672633 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpp7d\" (UniqueName: \"kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.684740 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.684768 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.687078 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.806466 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d94e8a6-b950-4373-a5fb-c7bbf8a82650" path="/var/lib/kubelet/pods/2d94e8a6-b950-4373-a5fb-c7bbf8a82650/volumes" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.807158 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330f40bb-ffce-4c10-b3d3-9adf6ddacef0" path="/var/lib/kubelet/pods/330f40bb-ffce-4c10-b3d3-9adf6ddacef0/volumes" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.956441 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:43 crc kubenswrapper[4747]: I1126 13:40:43.966314 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:44 crc kubenswrapper[4747]: I1126 13:40:44.410506 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:40:44 crc kubenswrapper[4747]: W1126 13:40:44.414397 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaebdd77_bb9b_4d11_b7cc_1e7fa7dd06b4.slice/crio-527d07e9f317955a7cca1d404424f21612e1fde180db14d25ed323cff956906d WatchSource:0}: Error finding container 527d07e9f317955a7cca1d404424f21612e1fde180db14d25ed323cff956906d: Status 404 returned error can't find the container with id 527d07e9f317955a7cca1d404424f21612e1fde180db14d25ed323cff956906d Nov 26 13:40:44 crc kubenswrapper[4747]: I1126 13:40:44.475254 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.227512 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerStarted","Data":"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.227880 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerStarted","Data":"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.227899 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerStarted","Data":"66ae0af854c61c9fb21026c6ab633be576abb282f3a21cd8cfcde6469b6a174f"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.230008 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerStarted","Data":"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.230039 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerStarted","Data":"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.230070 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerStarted","Data":"527d07e9f317955a7cca1d404424f21612e1fde180db14d25ed323cff956906d"} Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.256154 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.25612155 podStartE2EDuration="2.25612155s" podCreationTimestamp="2025-11-26 13:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:45.251863173 +0000 UTC m=+1532.238174238" watchObservedRunningTime="2025-11-26 13:40:45.25612155 +0000 UTC m=+1532.242432615" Nov 26 13:40:45 crc kubenswrapper[4747]: I1126 13:40:45.281525 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.281507124 podStartE2EDuration="2.281507124s" podCreationTimestamp="2025-11-26 13:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:40:45.274648293 +0000 UTC m=+1532.260959318" watchObservedRunningTime="2025-11-26 13:40:45.281507124 +0000 UTC m=+1532.267818129" Nov 26 13:40:53 crc kubenswrapper[4747]: I1126 13:40:53.958086 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:53 crc kubenswrapper[4747]: I1126 13:40:53.958395 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:53 crc kubenswrapper[4747]: I1126 13:40:53.970326 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:53 crc kubenswrapper[4747]: I1126 13:40:53.970626 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:53 crc kubenswrapper[4747]: I1126 13:40:53.987519 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.001869 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.007142 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.037447 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.295970 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.296023 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" 
Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.296034 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:54 crc kubenswrapper[4747]: I1126 13:40:54.296043 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:56 crc kubenswrapper[4747]: I1126 13:40:56.094269 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:56 crc kubenswrapper[4747]: I1126 13:40:56.096869 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:40:56 crc kubenswrapper[4747]: I1126 13:40:56.339538 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:40:56 crc kubenswrapper[4747]: I1126 13:40:56.339658 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:40:56 crc kubenswrapper[4747]: I1126 13:40:56.347764 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:41:03 crc kubenswrapper[4747]: I1126 13:41:03.417459 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:41:03 crc kubenswrapper[4747]: I1126 13:41:03.417979 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:41:03 crc kubenswrapper[4747]: I1126 13:41:03.418022 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" Nov 26 13:41:03 crc kubenswrapper[4747]: I1126 13:41:03.418715 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:41:03 crc kubenswrapper[4747]: I1126 13:41:03.418774 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" gracePeriod=600 Nov 26 13:41:03 crc kubenswrapper[4747]: E1126 13:41:03.558617 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" 
podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:04 crc kubenswrapper[4747]: I1126 13:41:04.377375 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" exitCode=0 Nov 26 13:41:04 crc kubenswrapper[4747]: I1126 13:41:04.377421 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"} Nov 26 13:41:04 crc kubenswrapper[4747]: I1126 13:41:04.377459 4747 scope.go:117] "RemoveContainer" containerID="30240290fdfc14964fe95aadfb3a6164f4e3e5dee1aed61ae4404c0699f012b8" Nov 26 13:41:04 crc kubenswrapper[4747]: I1126 13:41:04.377970 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:41:04 crc kubenswrapper[4747]: E1126 13:41:04.378263 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:14 crc kubenswrapper[4747]: I1126 13:41:14.798186 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:41:14 crc kubenswrapper[4747]: E1126 13:41:14.799035 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:16 crc kubenswrapper[4747]: E1126 13:41:16.819619 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:41:26 crc kubenswrapper[4747]: I1126 13:41:26.798962 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:41:26 crc kubenswrapper[4747]: E1126 13:41:26.799657 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:27 crc kubenswrapper[4747]: E1126 13:41:27.000271 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:41:37 crc kubenswrapper[4747]: E1126 13:41:37.169684 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:41:37 crc kubenswrapper[4747]: I1126 13:41:37.799507 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:41:37 crc kubenswrapper[4747]: E1126 13:41:37.801852 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.293922 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.294580 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-log" containerID="cri-o://370ed4e12cb5bcd472670df6ebc72f0575bb1346e52a0b93c66c38634e1859d2" gracePeriod=30 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.294627 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-httpd" containerID="cri-o://07a08b60e70a0a0cca839ad71264cadb5116278cf7db6ff3caefd6626b55aee6" gracePeriod=30 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.470033 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.470340 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-log" containerID="cri-o://d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1" gracePeriod=30 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.470463 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-httpd" containerID="cri-o://d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4" gracePeriod=30 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.655382 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerID="d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1" exitCode=143 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.655463 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerDied","Data":"d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1"} Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.657345 4747 generic.go:334] "Generic (PLEG): container finished" podID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerID="370ed4e12cb5bcd472670df6ebc72f0575bb1346e52a0b93c66c38634e1859d2" exitCode=143 Nov 26 13:41:40 crc kubenswrapper[4747]: I1126 13:41:40.657402 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerDied","Data":"370ed4e12cb5bcd472670df6ebc72f0575bb1346e52a0b93c66c38634e1859d2"} Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.714015 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-276p7"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.721398 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-276p7"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.766838 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.767081 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-log" containerID="cri-o://53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9" gracePeriod=30 Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.767219 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-httpd" containerID="cri-o://316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268" gracePeriod=30 Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.809409 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4138a010-9438-41a8-8113-3067d2e885b4" path="/var/lib/kubelet/pods/4138a010-9438-41a8-8113-3067d2e885b4/volumes" Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.818513 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance2a2e-account-delete-d7bst"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.819345 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.833996 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.838848 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-log" containerID="cri-o://9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661" gracePeriod=30 Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.839001 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-httpd" containerID="cri-o://f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57" gracePeriod=30 Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.852303 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance2a2e-account-delete-d7bst"] Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.901243 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts\") pod \"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:41 crc kubenswrapper[4747]: I1126 13:41:41.901386 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwsq\" (UniqueName: \"kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq\") pod \"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.002596 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts\") pod \"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.002675 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqwsq\" (UniqueName: \"kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq\") pod \"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.003908 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts\") pod \"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.022459 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqwsq\" (UniqueName: \"kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq\") pod 
\"glance2a2e-account-delete-d7bst\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.138315 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.622767 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance2a2e-account-delete-d7bst"] Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.677966 4747 generic.go:334] "Generic (PLEG): container finished" podID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerID="53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9" exitCode=143 Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.678011 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerDied","Data":"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9"} Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.679801 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" event={"ID":"b4134af1-bd81-496c-9720-da5b89f55f71","Type":"ContainerStarted","Data":"9632e10df1128946ba5806580767968af55b31bfb6c6a436dc12748860ca5684"} Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.681824 4747 generic.go:334] "Generic (PLEG): container finished" podID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerID="9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661" exitCode=143 Nov 26 13:41:42 crc kubenswrapper[4747]: I1126 13:41:42.681858 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerDied","Data":"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661"} Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.697938 4747 generic.go:334] "Generic (PLEG): container finished" podID="b4134af1-bd81-496c-9720-da5b89f55f71" containerID="b97de9da232f29d353e3c39c14c313b58587c293fb34e2f94b7ae5dc05ea9e07" exitCode=0 Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.698262 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" event={"ID":"b4134af1-bd81-496c-9720-da5b89f55f71","Type":"ContainerDied","Data":"b97de9da232f29d353e3c39c14c313b58587c293fb34e2f94b7ae5dc05ea9e07"} Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.700903 4747 generic.go:334] "Generic (PLEG): container finished" podID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerID="07a08b60e70a0a0cca839ad71264cadb5116278cf7db6ff3caefd6626b55aee6" exitCode=0 Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.700927 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerDied","Data":"07a08b60e70a0a0cca839ad71264cadb5116278cf7db6ff3caefd6626b55aee6"} Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.920890 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.931798 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.931865 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.931929 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.931976 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932092 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932159 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932213 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932257 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932340 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932370 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932407 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932449 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932484 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932514 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9qf6\" (UniqueName: \"kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6\") pod \"229b992e-fa29-4cd2-9203-0d9b03171a6e\" (UID: \"229b992e-fa29-4cd2-9203-0d9b03171a6e\") " Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932516 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932572 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932609 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev" (OuterVolumeSpecName: "dev") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932638 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.932977 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.933003 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.933019 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.933036 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.934172 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run" (OuterVolumeSpecName: "run") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.934450 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.934516 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys" (OuterVolumeSpecName: "sys") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.934536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs" (OuterVolumeSpecName: "logs") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.934544 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.941402 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts" (OuterVolumeSpecName: "scripts") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.941455 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6" (OuterVolumeSpecName: "kube-api-access-g9qf6") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "kube-api-access-g9qf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.950270 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.954276 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:43 crc kubenswrapper[4747]: I1126 13:41:43.989555 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data" (OuterVolumeSpecName: "config-data") pod "229b992e-fa29-4cd2-9203-0d9b03171a6e" (UID: "229b992e-fa29-4cd2-9203-0d9b03171a6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.038419 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040066 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040172 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b992e-fa29-4cd2-9203-0d9b03171a6e-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040261 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040347 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040405 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040474 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9qf6\" (UniqueName: \"kubernetes.io/projected/229b992e-fa29-4cd2-9203-0d9b03171a6e-kube-api-access-g9qf6\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040530 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040583 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/229b992e-fa29-4cd2-9203-0d9b03171a6e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040920 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/229b992e-fa29-4cd2-9203-0d9b03171a6e-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.040995 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.055478 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.069306 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.142470 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.142764 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.142874 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.142957 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143136 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.142601 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143077 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143222 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143308 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs" (OuterVolumeSpecName: "logs") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143314 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdqk\" (UniqueName: \"kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143412 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143436 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143476 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev" (OuterVolumeSpecName: "dev") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143504 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143534 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143550 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143575 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143582 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143582 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run" (OuterVolumeSpecName: "run") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143610 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.143628 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules\") pod \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\" (UID: \"f4b0bc3e-7c63-442b-91bf-9cb417a13f16\") " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144095 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144110 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144118 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144126 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144135 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144143 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144150 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144159 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144166 
4747 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144166 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys" (OuterVolumeSpecName: "sys") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.144206 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.146328 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk" (OuterVolumeSpecName: "kube-api-access-zbdqk") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "kube-api-access-zbdqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.148130 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.148575 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts" (OuterVolumeSpecName: "scripts") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.153321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.188228 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data" (OuterVolumeSpecName: "config-data") pod "f4b0bc3e-7c63-442b-91bf-9cb417a13f16" (UID: "f4b0bc3e-7c63-442b-91bf-9cb417a13f16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246414 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbdqk\" (UniqueName: \"kubernetes.io/projected/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-kube-api-access-zbdqk\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246488 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246515 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246580 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246609 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246635 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b0bc3e-7c63-442b-91bf-9cb417a13f16-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.246672 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.262609 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.275039 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.348026 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.348089 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.713102 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerID="d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4" exitCode=0 Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.713145 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.713160 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerDied","Data":"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4"} Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.713222 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"f4b0bc3e-7c63-442b-91bf-9cb417a13f16","Type":"ContainerDied","Data":"cfdc01c09cb2961fe37acaafa9311ab3b7da82f62bf84dcf8a26ae0431e860c6"} Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.713241 4747 scope.go:117] "RemoveContainer" containerID="d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.716264 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"229b992e-fa29-4cd2-9203-0d9b03171a6e","Type":"ContainerDied","Data":"e72ae1a8806f8440015700e723613b40c5bdb9401df8d4a6469179bb05d8effb"} Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.716283 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.752034 4747 scope.go:117] "RemoveContainer" containerID="d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.780864 4747 scope.go:117] "RemoveContainer" containerID="d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4" Nov 26 13:41:44 crc kubenswrapper[4747]: E1126 13:41:44.781858 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4\": container with ID starting with d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4 not found: ID does not exist" containerID="d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.781897 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4"} err="failed to get container status \"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4\": rpc error: code = NotFound desc = could not find container \"d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4\": container with ID starting with d0e943d083f87159aae323d81187553d3a3fd3df6003046444665a3b1ded80e4 not found: ID does not exist" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.781921 4747 scope.go:117] "RemoveContainer" containerID="d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1" Nov 26 13:41:44 crc kubenswrapper[4747]: E1126 13:41:44.782159 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1\": container with ID starting with d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1 not found: ID does not exist" containerID="d0e5b063e759157703699cf6b5cf5e62469fa2dcaaea6615b8cda947a47de5f1" Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.782196 4747 
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.782212 4747 scope.go:117] "RemoveContainer" containerID="07a08b60e70a0a0cca839ad71264cadb5116278cf7db6ff3caefd6626b55aee6"
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.795397 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.803089 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"]
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.812980 4747 scope.go:117] "RemoveContainer" containerID="370ed4e12cb5bcd472670df6ebc72f0575bb1346e52a0b93c66c38634e1859d2"
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.819019 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.837764 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.956614 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.135:9292/healthcheck\": read tcp 10.217.0.2:39006->10.217.0.135:9292: read: connection reset by peer"
Nov 26 13:41:44 crc kubenswrapper[4747]: I1126 13:41:44.956638 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.135:9292/healthcheck\": read tcp 10.217.0.2:39010->10.217.0.135:9292: read: connection reset by peer"
Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:44.999926 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.136:9292/healthcheck\": read tcp 10.217.0.2:44476->10.217.0.136:9292: read: connection reset by peer"
Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:44.999943 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.136:9292/healthcheck\": read tcp 10.217.0.2:44492->10.217.0.136:9292: read: connection reset by peer"
Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.255803 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst"
Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.332090 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
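The readiness probes above are plain HTTP GETs against each pod IP's /healthcheck endpoint; "connection reset by peer" is the expected failure mode while glance-httpd is shutting down. A minimal sketch of such a probe (the timeout value is an assumption, not taken from the log):

```go
// Sketch: what a readiness probe like the ones above amounts to — an HTTP
// GET with a short timeout; any transport error marks the probe failed.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. read: connection reset by peer
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.136:9292/healthcheck"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```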
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367661 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367724 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts\") pod \"b4134af1-bd81-496c-9720-da5b89f55f71\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367785 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367811 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpp7d\" (UniqueName: \"kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367840 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367861 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqwsq\" (UniqueName: \"kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq\") pod \"b4134af1-bd81-496c-9720-da5b89f55f71\" (UID: \"b4134af1-bd81-496c-9720-da5b89f55f71\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367901 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367929 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367953 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.367988 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368005 
4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368036 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368141 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368184 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368247 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368267 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules\") pod \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\" (UID: \"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368624 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run" (OuterVolumeSpecName: "run") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368653 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.368626 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4134af1-bd81-496c-9720-da5b89f55f71" (UID: "b4134af1-bd81-496c-9720-da5b89f55f71"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.369209 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys" (OuterVolumeSpecName: "sys") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.369266 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev" (OuterVolumeSpecName: "dev") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.369611 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.369648 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.369919 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs" (OuterVolumeSpecName: "logs") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts" (OuterVolumeSpecName: "scripts") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373282 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373302 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq" (OuterVolumeSpecName: "kube-api-access-rqwsq") pod "b4134af1-bd81-496c-9720-da5b89f55f71" (UID: "b4134af1-bd81-496c-9720-da5b89f55f71"). InnerVolumeSpecName "kube-api-access-rqwsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373311 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373872 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.373928 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.377269 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d" (OuterVolumeSpecName: "kube-api-access-fpp7d") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "kube-api-access-fpp7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.405951 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.406180 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data" (OuterVolumeSpecName: "config-data") pod "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" (UID: "baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469237 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469492 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469558 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469584 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469629 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run" (OuterVolumeSpecName: "run") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469662 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469755 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469919 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev" (OuterVolumeSpecName: "dev") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.469792 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470105 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470131 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470162 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhl5m\" (UniqueName: \"kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470194 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470259 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470306 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470329 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470316 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470345 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys" (OuterVolumeSpecName: "sys") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470355 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470368 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470382 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs\") pod \"1e2976f0-f700-4faf-8fa1-1b682714edb4\" (UID: \"1e2976f0-f700-4faf-8fa1-1b682714edb4\") " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470655 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470669 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470680 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470689 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470711 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470723 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470733 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470744 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-lib-modules\") on 
node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470752 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470760 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470768 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470776 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470784 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470791 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470800 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470807 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470817 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470825 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470810 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs" (OuterVolumeSpecName: "logs") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470833 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4134af1-bd81-496c-9720-da5b89f55f71-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470888 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470906 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpp7d\" (UniqueName: \"kubernetes.io/projected/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4-kube-api-access-fpp7d\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470919 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1e2976f0-f700-4faf-8fa1-1b682714edb4-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470950 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.470964 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqwsq\" (UniqueName: \"kubernetes.io/projected/b4134af1-bd81-496c-9720-da5b89f55f71-kube-api-access-rqwsq\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.473312 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m" (OuterVolumeSpecName: "kube-api-access-vhl5m") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "kube-api-access-vhl5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.473375 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts" (OuterVolumeSpecName: "scripts") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.474437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.475426 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance-cache") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.487552 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.493103 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.529367 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data" (OuterVolumeSpecName: "config-data") pod "1e2976f0-f700-4faf-8fa1-1b682714edb4" (UID: "1e2976f0-f700-4faf-8fa1-1b682714edb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572651 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572684 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhl5m\" (UniqueName: \"kubernetes.io/projected/1e2976f0-f700-4faf-8fa1-1b682714edb4-kube-api-access-vhl5m\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572696 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572705 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2976f0-f700-4faf-8fa1-1b682714edb4-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572719 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572727 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572735 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2976f0-f700-4faf-8fa1-1b682714edb4-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.572745 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.587845 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.588668 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.673663 4747 reconciler_common.go:293] "Volume detached for 
volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.673966 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.729157 4747 generic.go:334] "Generic (PLEG): container finished" podID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerID="f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57" exitCode=0 Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.729239 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.729250 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerDied","Data":"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57"} Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.731203 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4","Type":"ContainerDied","Data":"527d07e9f317955a7cca1d404424f21612e1fde180db14d25ed323cff956906d"} Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.731237 4747 scope.go:117] "RemoveContainer" containerID="f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.732969 4747 generic.go:334] "Generic (PLEG): container finished" podID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerID="316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268" exitCode=0 Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.733025 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.733047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerDied","Data":"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268"} Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.733191 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1e2976f0-f700-4faf-8fa1-1b682714edb4","Type":"ContainerDied","Data":"66ae0af854c61c9fb21026c6ab633be576abb282f3a21cd8cfcde6469b6a174f"} Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.734609 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" event={"ID":"b4134af1-bd81-496c-9720-da5b89f55f71","Type":"ContainerDied","Data":"9632e10df1128946ba5806580767968af55b31bfb6c6a436dc12748860ca5684"} Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.734635 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9632e10df1128946ba5806580767968af55b31bfb6c6a436dc12748860ca5684" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.734672 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance2a2e-account-delete-d7bst" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.765290 4747 scope.go:117] "RemoveContainer" containerID="9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.782464 4747 scope.go:117] "RemoveContainer" containerID="f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57" Nov 26 13:41:45 crc kubenswrapper[4747]: E1126 13:41:45.783231 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57\": container with ID starting with f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57 not found: ID does not exist" containerID="f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.783260 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57"} err="failed to get container status \"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57\": rpc error: code = NotFound desc = could not find container \"f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57\": container with ID starting with f9852cca59176ab313547b920f7e5c52669e7d94b0554e1d45ac15a103279e57 not found: ID does not exist" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.783277 4747 scope.go:117] "RemoveContainer" containerID="9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661" Nov 26 13:41:45 crc kubenswrapper[4747]: E1126 13:41:45.783547 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661\": container with ID starting with 9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661 not found: ID does not exist" containerID="9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.783572 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661"} err="failed to get container status \"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661\": rpc error: code = NotFound desc = could not find container \"9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661\": container with ID starting with 9aefd77d99e4ac48207418b180b73287ed7c7973ca2bbd58eb06c7f43b3a1661 not found: ID does not exist" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.783587 4747 scope.go:117] "RemoveContainer" containerID="316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.794495 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.806879 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" path="/var/lib/kubelet/pods/229b992e-fa29-4cd2-9203-0d9b03171a6e/volumes" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.807797 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" 
path="/var/lib/kubelet/pods/f4b0bc3e-7c63-442b-91bf-9cb417a13f16/volumes" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.808327 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.813307 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.816121 4747 scope.go:117] "RemoveContainer" containerID="53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.818603 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.831384 4747 scope.go:117] "RemoveContainer" containerID="316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268" Nov 26 13:41:45 crc kubenswrapper[4747]: E1126 13:41:45.832444 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268\": container with ID starting with 316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268 not found: ID does not exist" containerID="316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.832692 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268"} err="failed to get container status \"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268\": rpc error: code = NotFound desc = could not find container \"316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268\": container with ID starting with 316b524a02b9e0272310747f50d36ad8240d56e5666f136333c91c4a3f04d268 not found: ID does not exist" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.832722 4747 scope.go:117] "RemoveContainer" containerID="53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9" Nov 26 13:41:45 crc kubenswrapper[4747]: E1126 13:41:45.833165 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9\": container with ID starting with 53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9 not found: ID does not exist" containerID="53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9" Nov 26 13:41:45 crc kubenswrapper[4747]: I1126 13:41:45.833211 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9"} err="failed to get container status \"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9\": rpc error: code = NotFound desc = could not find container \"53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9\": container with ID starting with 53b714763a01f64c4649f8c6ba616bf13bc33212f13cbe96ad60b69cd8e50ef9 not found: ID does not exist" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.166879 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.167156 4747 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/openstackclient" podUID="48756a07-259f-47bf-9088-0364f426fb71" containerName="openstackclient" containerID="cri-o://c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee" gracePeriod=30 Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.607531 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.687601 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config\") pod \"48756a07-259f-47bf-9088-0364f426fb71\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.687692 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts\") pod \"48756a07-259f-47bf-9088-0364f426fb71\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.687722 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdphd\" (UniqueName: \"kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd\") pod \"48756a07-259f-47bf-9088-0364f426fb71\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.687794 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret\") pod \"48756a07-259f-47bf-9088-0364f426fb71\" (UID: \"48756a07-259f-47bf-9088-0364f426fb71\") " Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.688685 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts" (OuterVolumeSpecName: "openstack-scripts") pod "48756a07-259f-47bf-9088-0364f426fb71" (UID: "48756a07-259f-47bf-9088-0364f426fb71"). InnerVolumeSpecName "openstack-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.692793 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd" (OuterVolumeSpecName: "kube-api-access-zdphd") pod "48756a07-259f-47bf-9088-0364f426fb71" (UID: "48756a07-259f-47bf-9088-0364f426fb71"). InnerVolumeSpecName "kube-api-access-zdphd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.704569 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "48756a07-259f-47bf-9088-0364f426fb71" (UID: "48756a07-259f-47bf-9088-0364f426fb71"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.706435 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "48756a07-259f-47bf-9088-0364f426fb71" (UID: "48756a07-259f-47bf-9088-0364f426fb71"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.748186 4747 generic.go:334] "Generic (PLEG): container finished" podID="48756a07-259f-47bf-9088-0364f426fb71" containerID="c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee" exitCode=143 Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.748264 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"48756a07-259f-47bf-9088-0364f426fb71","Type":"ContainerDied","Data":"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee"} Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.748270 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.748303 4747 scope.go:117] "RemoveContainer" containerID="c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.748292 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"48756a07-259f-47bf-9088-0364f426fb71","Type":"ContainerDied","Data":"0937c56297824e8e44145aebeb1544a5e917f880e1866f633721b930a8b63908"} Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.788507 4747 scope.go:117] "RemoveContainer" containerID="c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee" Nov 26 13:41:46 crc kubenswrapper[4747]: E1126 13:41:46.789629 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee\": container with ID starting with c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee not found: ID does not exist" containerID="c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.789674 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee"} err="failed to get container status \"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee\": rpc error: code = NotFound desc = could not find container \"c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee\": container with ID starting with c6e6607904364e5e6f561ad3a1b99aa217be6c6ec4bf5818a7be420a716eb6ee not found: ID does not exist" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.793246 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.793307 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/48756a07-259f-47bf-9088-0364f426fb71-openstack-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 
13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.793327 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdphd\" (UniqueName: \"kubernetes.io/projected/48756a07-259f-47bf-9088-0364f426fb71-kube-api-access-zdphd\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.793347 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48756a07-259f-47bf-9088-0364f426fb71-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.814617 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.823204 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.847573 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-p6d2d"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.853672 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-p6d2d"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.858845 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance2a2e-account-delete-d7bst"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.863524 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.869124 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-2a2e-account-create-update-w4fvs"] Nov 26 13:41:46 crc kubenswrapper[4747]: I1126 13:41:46.874504 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance2a2e-account-delete-d7bst"] Nov 26 13:41:47 crc kubenswrapper[4747]: E1126 13:41:47.351867 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.808881 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" path="/var/lib/kubelet/pods/1e2976f0-f700-4faf-8fa1-1b682714edb4/volumes" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.809677 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48756a07-259f-47bf-9088-0364f426fb71" path="/var/lib/kubelet/pods/48756a07-259f-47bf-9088-0364f426fb71/volumes" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.810167 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896a016c-4ec2-4a99-9927-1e3464105999" path="/var/lib/kubelet/pods/896a016c-4ec2-4a99-9927-1e3464105999/volumes" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.811220 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e58e20b-6846-447e-95e9-2d81f8688d6f" path="/var/lib/kubelet/pods/8e58e20b-6846-447e-95e9-2d81f8688d6f/volumes" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.811668 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4134af1-bd81-496c-9720-da5b89f55f71" path="/var/lib/kubelet/pods/b4134af1-bd81-496c-9720-da5b89f55f71/volumes" Nov 26 13:41:47 crc kubenswrapper[4747]: I1126 13:41:47.812175 4747 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" path="/var/lib/kubelet/pods/baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4/volumes" Nov 26 13:41:49 crc kubenswrapper[4747]: I1126 13:41:49.798948 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:41:49 crc kubenswrapper[4747]: E1126 13:41:49.799842 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.125700 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-8jt54"] Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126118 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126445 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126472 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126480 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126506 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126517 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126544 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126554 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126568 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126581 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126599 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126609 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126629 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-httpd" Nov 26 13:41:51 crc 
kubenswrapper[4747]: I1126 13:41:51.126638 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126658 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48756a07-259f-47bf-9088-0364f426fb71" containerName="openstackclient" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126667 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="48756a07-259f-47bf-9088-0364f426fb71" containerName="openstackclient" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126682 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126692 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: E1126 13:41:51.126708 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4134af1-bd81-496c-9720-da5b89f55f71" containerName="mariadb-account-delete" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126718 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4134af1-bd81-496c-9720-da5b89f55f71" containerName="mariadb-account-delete" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126899 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4134af1-bd81-496c-9720-da5b89f55f71" containerName="mariadb-account-delete" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126917 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126939 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126954 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126971 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b0bc3e-7c63-442b-91bf-9cb417a13f16" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.126989 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.127001 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="48756a07-259f-47bf-9088-0364f426fb71" containerName="openstackclient" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.127016 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="baebdd77-bb9b-4d11-b7cc-1e7fa7dd06b4" containerName="glance-httpd" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.127032 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2976f0-f700-4faf-8fa1-1b682714edb4" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.127072 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b992e-fa29-4cd2-9203-0d9b03171a6e" containerName="glance-log" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.127719 4747 util.go:30] "No sandbox for pod can be found. 
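The machine-config-daemon entry above shows the opposite failure mode to the clean exits elsewhere in this log: StartContainer is refused with CrashLoopBackOff, "back-off 5m0s restarting failed container". Kubelet's restart backoff doubles on each consecutive failure up to that cap; the 5m ceiling is quoted verbatim in the log, while the 10s seed below is kubelet's usual default and is an assumption here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Exponential restart backoff: the 5m cap appears verbatim in the
	// log; the 10s seed is kubelet's usual default, assumed here.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```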
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.137576 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-8jt54"] Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.155908 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-600c-account-create-update-vq9k8"] Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.156968 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.160334 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5pw\" (UniqueName: \"kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.160381 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.163244 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.174136 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-600c-account-create-update-vq9k8"] Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.261865 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.261965 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5pw\" (UniqueName: \"kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.261997 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.262028 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsn8\" (UniqueName: \"kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 
13:41:51.262775 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.284289 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5pw\" (UniqueName: \"kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw\") pod \"glance-db-create-8jt54\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.363828 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.364246 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsn8\" (UniqueName: \"kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.365390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.386413 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsn8\" (UniqueName: \"kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8\") pod \"glance-600c-account-create-update-vq9k8\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.453171 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.484146 4747 util.go:30] "No sandbox for pod can be found. 
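Pod startup here mirrors the teardowns earlier, with three logged steps per volume: VerifyControllerAttachedVolume started (reconciler_common.go:245), MountVolume started (:218), and MountVolume.SetUp succeeded (operation_generator.go:637). The sandbox is only created once every volume reaches the final step; a minimal sketch of that readiness gate, with names simplified:

```go
package main

import "fmt"

// podReady reports whether every volume for the pod has completed
// MountVolume.SetUp (a simplified gate mirroring the logged sequence).
func podReady(setUp map[string]bool) bool {
	for vol, ok := range setUp {
		if !ok {
			fmt.Printf("waiting on volume %q\n", vol)
			return false
		}
	}
	return true
}

func main() {
	vols := map[string]bool{"operator-scripts": true, "kube-api-access-zx5pw": false}
	fmt.Println("can start sandbox:", podReady(vols))
	vols["kube-api-access-zx5pw"] = true // MountVolume.SetUp succeeded
	fmt.Println("can start sandbox:", podReady(vols))
}
```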
Need to start a new one" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.900551 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-8jt54"] Nov 26 13:41:51 crc kubenswrapper[4747]: I1126 13:41:51.944230 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-600c-account-create-update-vq9k8"] Nov 26 13:41:51 crc kubenswrapper[4747]: W1126 13:41:51.960099 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39a1e2aa_8e90_4d50_aa8a_905474945f53.slice/crio-aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56 WatchSource:0}: Error finding container aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56: Status 404 returned error can't find the container with id aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56 Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.817589 4747 generic.go:334] "Generic (PLEG): container finished" podID="fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" containerID="01732291b9e401883f26f8f02a1ea169a1a5ad478ec2596bd7f585030f406e2a" exitCode=0 Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.817711 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-8jt54" event={"ID":"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d","Type":"ContainerDied","Data":"01732291b9e401883f26f8f02a1ea169a1a5ad478ec2596bd7f585030f406e2a"} Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.818323 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-8jt54" event={"ID":"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d","Type":"ContainerStarted","Data":"3023eaa42b503648371ae04f53528f6bcb8e43c70daf3850f426bb8473e8b03f"} Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.821795 4747 generic.go:334] "Generic (PLEG): container finished" podID="39a1e2aa-8e90-4d50-aa8a-905474945f53" containerID="98601232955ea35f5e0f036578ba73b478105385469aea271d3951341a896da2" exitCode=0 Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.821859 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" event={"ID":"39a1e2aa-8e90-4d50-aa8a-905474945f53","Type":"ContainerDied","Data":"98601232955ea35f5e0f036578ba73b478105385469aea271d3951341a896da2"} Nov 26 13:41:52 crc kubenswrapper[4747]: I1126 13:41:52.821903 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" event={"ID":"39a1e2aa-8e90-4d50-aa8a-905474945f53","Type":"ContainerStarted","Data":"aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56"} Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.227687 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.232161 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.305006 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts\") pod \"39a1e2aa-8e90-4d50-aa8a-905474945f53\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.305225 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqsn8\" (UniqueName: \"kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8\") pod \"39a1e2aa-8e90-4d50-aa8a-905474945f53\" (UID: \"39a1e2aa-8e90-4d50-aa8a-905474945f53\") " Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.305288 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts\") pod \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.305361 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5pw\" (UniqueName: \"kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw\") pod \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\" (UID: \"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d\") " Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.306382 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39a1e2aa-8e90-4d50-aa8a-905474945f53" (UID: "39a1e2aa-8e90-4d50-aa8a-905474945f53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.306930 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" (UID: "fb75f03c-de62-477e-bcc4-f4eb2ea7b03d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.310837 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw" (OuterVolumeSpecName: "kube-api-access-zx5pw") pod "fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" (UID: "fb75f03c-de62-477e-bcc4-f4eb2ea7b03d"). InnerVolumeSpecName "kube-api-access-zx5pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.310862 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8" (OuterVolumeSpecName: "kube-api-access-wqsn8") pod "39a1e2aa-8e90-4d50-aa8a-905474945f53" (UID: "39a1e2aa-8e90-4d50-aa8a-905474945f53"). InnerVolumeSpecName "kube-api-access-wqsn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.407592 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39a1e2aa-8e90-4d50-aa8a-905474945f53-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.407664 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqsn8\" (UniqueName: \"kubernetes.io/projected/39a1e2aa-8e90-4d50-aa8a-905474945f53-kube-api-access-wqsn8\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.407696 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.407708 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx5pw\" (UniqueName: \"kubernetes.io/projected/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d-kube-api-access-zx5pw\") on node \"crc\" DevicePath \"\"" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.845679 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" event={"ID":"39a1e2aa-8e90-4d50-aa8a-905474945f53","Type":"ContainerDied","Data":"aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56"} Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.845740 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee02eef9ed2d0dffcef6c4b410453827376e050641ac6d7fcac799f94826e56" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.845699 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-600c-account-create-update-vq9k8" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.847973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-8jt54" event={"ID":"fb75f03c-de62-477e-bcc4-f4eb2ea7b03d","Type":"ContainerDied","Data":"3023eaa42b503648371ae04f53528f6bcb8e43c70daf3850f426bb8473e8b03f"} Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.848015 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3023eaa42b503648371ae04f53528f6bcb8e43c70daf3850f426bb8473e8b03f" Nov 26 13:41:54 crc kubenswrapper[4747]: I1126 13:41:54.848109 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-8jt54" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.142516 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:41:56 crc kubenswrapper[4747]: E1126 13:41:56.143245 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a1e2aa-8e90-4d50-aa8a-905474945f53" containerName="mariadb-account-create-update" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.143266 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a1e2aa-8e90-4d50-aa8a-905474945f53" containerName="mariadb-account-create-update" Nov 26 13:41:56 crc kubenswrapper[4747]: E1126 13:41:56.143289 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" containerName="mariadb-database-create" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.143298 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" containerName="mariadb-database-create" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.143488 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" containerName="mariadb-database-create" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.143523 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a1e2aa-8e90-4d50-aa8a-905474945f53" containerName="mariadb-account-create-update" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.145024 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.156863 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.233756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.233962 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsb5k\" (UniqueName: \"kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.234025 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.335622 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 
13:41:56.335790 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsb5k\" (UniqueName: \"kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.335830 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.336167 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.336374 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.358583 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsb5k\" (UniqueName: \"kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k\") pod \"community-operators-5ng27\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.442897 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-ql9bd"] Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.443765 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.445969 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-spjq2" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.451821 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-ql9bd"] Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.452368 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.476286 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.538011 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv47m\" (UniqueName: \"kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.538501 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.538548 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.639398 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv47m\" (UniqueName: \"kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.639471 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.639501 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.644596 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.647995 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.684040 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv47m\" (UniqueName: \"kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m\") pod \"glance-db-sync-ql9bd\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " 
pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.759541 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:41:56 crc kubenswrapper[4747]: I1126 13:41:56.959122 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:41:56 crc kubenswrapper[4747]: W1126 13:41:56.963201 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1662e982_2bce_4b21_a9ad_320473e84031.slice/crio-e0f72046998b89ddf770ac3f700a888cecc3d350d3a9a7678ff1781cc239a6d6 WatchSource:0}: Error finding container e0f72046998b89ddf770ac3f700a888cecc3d350d3a9a7678ff1781cc239a6d6: Status 404 returned error can't find the container with id e0f72046998b89ddf770ac3f700a888cecc3d350d3a9a7678ff1781cc239a6d6 Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.177731 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-ql9bd"] Nov 26 13:41:57 crc kubenswrapper[4747]: W1126 13:41:57.190843 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52d49d1_bd23_4029_b3cf_2375a6785d36.slice/crio-049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd WatchSource:0}: Error finding container 049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd: Status 404 returned error can't find the container with id 049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd Nov 26 13:41:57 crc kubenswrapper[4747]: E1126 13:41:57.521093 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.879122 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-ql9bd" event={"ID":"d52d49d1-bd23-4029-b3cf-2375a6785d36","Type":"ContainerStarted","Data":"517f5654426e06093bd2d02d69c46f756ccd8f31ceb2b6165c867e1cf30023b0"} Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.879478 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-ql9bd" event={"ID":"d52d49d1-bd23-4029-b3cf-2375a6785d36","Type":"ContainerStarted","Data":"049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd"} Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.880343 4747 generic.go:334] "Generic (PLEG): container finished" podID="1662e982-2bce-4b21-a9ad-320473e84031" containerID="94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e" exitCode=0 Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.880416 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerDied","Data":"94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e"} Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.880444 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerStarted","Data":"e0f72046998b89ddf770ac3f700a888cecc3d350d3a9a7678ff1781cc239a6d6"} Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.882259 4747 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:41:57 crc kubenswrapper[4747]: I1126 13:41:57.902812 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-ql9bd" podStartSLOduration=1.9027930849999999 podStartE2EDuration="1.902793085s" podCreationTimestamp="2025-11-26 13:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:41:57.899687827 +0000 UTC m=+1604.885998842" watchObservedRunningTime="2025-11-26 13:41:57.902793085 +0000 UTC m=+1604.889104110" Nov 26 13:41:59 crc kubenswrapper[4747]: I1126 13:41:59.900044 4747 generic.go:334] "Generic (PLEG): container finished" podID="1662e982-2bce-4b21-a9ad-320473e84031" containerID="b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18" exitCode=0 Nov 26 13:41:59 crc kubenswrapper[4747]: I1126 13:41:59.900156 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerDied","Data":"b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18"} Nov 26 13:42:00 crc kubenswrapper[4747]: I1126 13:42:00.912949 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerStarted","Data":"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68"} Nov 26 13:42:00 crc kubenswrapper[4747]: I1126 13:42:00.915464 4747 generic.go:334] "Generic (PLEG): container finished" podID="d52d49d1-bd23-4029-b3cf-2375a6785d36" containerID="517f5654426e06093bd2d02d69c46f756ccd8f31ceb2b6165c867e1cf30023b0" exitCode=0 Nov 26 13:42:00 crc kubenswrapper[4747]: I1126 13:42:00.915753 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-ql9bd" event={"ID":"d52d49d1-bd23-4029-b3cf-2375a6785d36","Type":"ContainerDied","Data":"517f5654426e06093bd2d02d69c46f756ccd8f31ceb2b6165c867e1cf30023b0"} Nov 26 13:42:00 crc kubenswrapper[4747]: I1126 13:42:00.948434 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5ng27" podStartSLOduration=2.322918346 podStartE2EDuration="4.948410531s" podCreationTimestamp="2025-11-26 13:41:56 +0000 UTC" firstStartedPulling="2025-11-26 13:41:57.881979935 +0000 UTC m=+1604.868290970" lastFinishedPulling="2025-11-26 13:42:00.50747214 +0000 UTC m=+1607.493783155" observedRunningTime="2025-11-26 13:42:00.942542174 +0000 UTC m=+1607.928853269" watchObservedRunningTime="2025-11-26 13:42:00.948410531 +0000 UTC m=+1607.934721566" Nov 26 13:42:01 crc kubenswrapper[4747]: I1126 13:42:01.798506 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:42:01 crc kubenswrapper[4747]: E1126 13:42:01.799038 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.237750 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.423481 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data\") pod \"d52d49d1-bd23-4029-b3cf-2375a6785d36\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.423943 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data\") pod \"d52d49d1-bd23-4029-b3cf-2375a6785d36\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.423977 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv47m\" (UniqueName: \"kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m\") pod \"d52d49d1-bd23-4029-b3cf-2375a6785d36\" (UID: \"d52d49d1-bd23-4029-b3cf-2375a6785d36\") " Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.429923 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d52d49d1-bd23-4029-b3cf-2375a6785d36" (UID: "d52d49d1-bd23-4029-b3cf-2375a6785d36"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.430417 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m" (OuterVolumeSpecName: "kube-api-access-fv47m") pod "d52d49d1-bd23-4029-b3cf-2375a6785d36" (UID: "d52d49d1-bd23-4029-b3cf-2375a6785d36"). InnerVolumeSpecName "kube-api-access-fv47m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.468895 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data" (OuterVolumeSpecName: "config-data") pod "d52d49d1-bd23-4029-b3cf-2375a6785d36" (UID: "d52d49d1-bd23-4029-b3cf-2375a6785d36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.525485 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.525529 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv47m\" (UniqueName: \"kubernetes.io/projected/d52d49d1-bd23-4029-b3cf-2375a6785d36-kube-api-access-fv47m\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.525543 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52d49d1-bd23-4029-b3cf-2375a6785d36-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.943544 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-ql9bd" event={"ID":"d52d49d1-bd23-4029-b3cf-2375a6785d36","Type":"ContainerDied","Data":"049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd"} Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.943847 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049e34654904be96ff582e065453d7dbfded6be97d688badacb49a69082afefd" Nov 26 13:42:02 crc kubenswrapper[4747]: I1126 13:42:02.943623 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-ql9bd" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.126133 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:04 crc kubenswrapper[4747]: E1126 13:42:04.127502 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52d49d1-bd23-4029-b3cf-2375a6785d36" containerName="glance-db-sync" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.127537 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52d49d1-bd23-4029-b3cf-2375a6785d36" containerName="glance-db-sync" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.127673 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52d49d1-bd23-4029-b3cf-2375a6785d36" containerName="glance-db-sync" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.128467 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.130718 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.130783 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-spjq2" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.131436 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.141816 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250738 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250828 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250860 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250881 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250922 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw96m\" (UniqueName: \"kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" 
Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.250987 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251007 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251027 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251099 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251118 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.251149 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.351912 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw96m\" (UniqueName: \"kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.351989 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc 
kubenswrapper[4747]: I1126 13:42:04.352017 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352044 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352109 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352112 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352128 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352188 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352205 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352256 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352259 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352274 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352366 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352429 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352462 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352481 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352492 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc 
kubenswrapper[4747]: I1126 13:42:04.352427 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352518 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.352712 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.353107 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.357985 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.358496 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.372199 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw96m\" (UniqueName: \"kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.374869 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.379423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.442222 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:04 crc kubenswrapper[4747]: W1126 13:42:04.898159 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod003133a2_9fe4_4567_ac85_b99fbee30003.slice/crio-594de66c98c3b338671e14f4891810d5241bf83836dfa28f7de446fdbbe1d328 WatchSource:0}: Error finding container 594de66c98c3b338671e14f4891810d5241bf83836dfa28f7de446fdbbe1d328: Status 404 returned error can't find the container with id 594de66c98c3b338671e14f4891810d5241bf83836dfa28f7de446fdbbe1d328 Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.902742 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:04 crc kubenswrapper[4747]: I1126 13:42:04.963243 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerStarted","Data":"594de66c98c3b338671e14f4891810d5241bf83836dfa28f7de446fdbbe1d328"} Nov 26 13:42:05 crc kubenswrapper[4747]: I1126 13:42:05.974531 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerStarted","Data":"30aa9effdd4dc750a1ba35376d9cac13db866c8d6cc87d8482be5d0932cf2d29"} Nov 26 13:42:06 crc kubenswrapper[4747]: I1126 13:42:06.477092 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:06 crc kubenswrapper[4747]: I1126 13:42:06.477438 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:06 crc kubenswrapper[4747]: I1126 13:42:06.521640 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:06 crc kubenswrapper[4747]: I1126 13:42:06.984329 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerStarted","Data":"96e48c7176bdb25a9048d4d6753f8603b74f6a8bb5c99000150ed2b968451cad"} Nov 26 13:42:07 crc kubenswrapper[4747]: I1126 13:42:07.010218 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.01018827 podStartE2EDuration="3.01018827s" podCreationTimestamp="2025-11-26 13:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:07.008883068 +0000 UTC m=+1613.995194073" watchObservedRunningTime="2025-11-26 13:42:07.01018827 +0000 UTC m=+1613.996499285" Nov 26 13:42:07 crc kubenswrapper[4747]: I1126 13:42:07.040806 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:07 crc kubenswrapper[4747]: I1126 13:42:07.097017 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:42:07 crc kubenswrapper[4747]: E1126 13:42:07.683002 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 26 13:42:08 crc kubenswrapper[4747]: 
I1126 13:42:08.997423 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5ng27" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="registry-server" containerID="cri-o://90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68" gracePeriod=2 Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.450287 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.537227 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content\") pod \"1662e982-2bce-4b21-a9ad-320473e84031\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.537338 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities\") pod \"1662e982-2bce-4b21-a9ad-320473e84031\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.537449 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsb5k\" (UniqueName: \"kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k\") pod \"1662e982-2bce-4b21-a9ad-320473e84031\" (UID: \"1662e982-2bce-4b21-a9ad-320473e84031\") " Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.538704 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities" (OuterVolumeSpecName: "utilities") pod "1662e982-2bce-4b21-a9ad-320473e84031" (UID: "1662e982-2bce-4b21-a9ad-320473e84031"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.543189 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k" (OuterVolumeSpecName: "kube-api-access-hsb5k") pod "1662e982-2bce-4b21-a9ad-320473e84031" (UID: "1662e982-2bce-4b21-a9ad-320473e84031"). InnerVolumeSpecName "kube-api-access-hsb5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.602145 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1662e982-2bce-4b21-a9ad-320473e84031" (UID: "1662e982-2bce-4b21-a9ad-320473e84031"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.639721 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.639763 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1662e982-2bce-4b21-a9ad-320473e84031-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:09 crc kubenswrapper[4747]: I1126 13:42:09.639773 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsb5k\" (UniqueName: \"kubernetes.io/projected/1662e982-2bce-4b21-a9ad-320473e84031-kube-api-access-hsb5k\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.012032 4747 generic.go:334] "Generic (PLEG): container finished" podID="1662e982-2bce-4b21-a9ad-320473e84031" containerID="90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68" exitCode=0 Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.012125 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerDied","Data":"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68"} Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.012203 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ng27" event={"ID":"1662e982-2bce-4b21-a9ad-320473e84031","Type":"ContainerDied","Data":"e0f72046998b89ddf770ac3f700a888cecc3d350d3a9a7678ff1781cc239a6d6"} Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.012214 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5ng27" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.012242 4747 scope.go:117] "RemoveContainer" containerID="90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.045629 4747 scope.go:117] "RemoveContainer" containerID="b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.045778 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.056526 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5ng27"] Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.082198 4747 scope.go:117] "RemoveContainer" containerID="94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.107282 4747 scope.go:117] "RemoveContainer" containerID="90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68" Nov 26 13:42:10 crc kubenswrapper[4747]: E1126 13:42:10.107803 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68\": container with ID starting with 90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68 not found: ID does not exist" containerID="90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.107849 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68"} err="failed to get container status \"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68\": rpc error: code = NotFound desc = could not find container \"90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68\": container with ID starting with 90dc86882a46dbf8638c79d945c7969e62ad45f6301bb36ef5098140c5a12b68 not found: ID does not exist" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.107990 4747 scope.go:117] "RemoveContainer" containerID="b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18" Nov 26 13:42:10 crc kubenswrapper[4747]: E1126 13:42:10.108439 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18\": container with ID starting with b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18 not found: ID does not exist" containerID="b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.108464 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18"} err="failed to get container status \"b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18\": rpc error: code = NotFound desc = could not find container \"b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18\": container with ID starting with b2fc37d4a99cfaa9da3802deb804a3ae8b616e1202c8cd122e62a26cc65c1c18 not found: ID does not exist" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.108485 4747 scope.go:117] "RemoveContainer" 
containerID="94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e" Nov 26 13:42:10 crc kubenswrapper[4747]: E1126 13:42:10.108807 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e\": container with ID starting with 94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e not found: ID does not exist" containerID="94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e" Nov 26 13:42:10 crc kubenswrapper[4747]: I1126 13:42:10.108829 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e"} err="failed to get container status \"94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e\": rpc error: code = NotFound desc = could not find container \"94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e\": container with ID starting with 94c2a88e8a349fc2b84e955e686338cd5d111b1119011e48f93fa081fb8cab8e not found: ID does not exist" Nov 26 13:42:11 crc kubenswrapper[4747]: I1126 13:42:11.814871 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1662e982-2bce-4b21-a9ad-320473e84031" path="/var/lib/kubelet/pods/1662e982-2bce-4b21-a9ad-320473e84031/volumes" Nov 26 13:42:13 crc kubenswrapper[4747]: I1126 13:42:13.803074 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:42:13 crc kubenswrapper[4747]: E1126 13:42:13.803444 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:42:14 crc kubenswrapper[4747]: I1126 13:42:14.443482 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:14 crc kubenswrapper[4747]: I1126 13:42:14.443808 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:14 crc kubenswrapper[4747]: I1126 13:42:14.475421 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:14 crc kubenswrapper[4747]: I1126 13:42:14.498707 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:15 crc kubenswrapper[4747]: I1126 13:42:15.055294 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:15 crc kubenswrapper[4747]: I1126 13:42:15.055346 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:17 crc kubenswrapper[4747]: I1126 13:42:17.072754 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:17 crc kubenswrapper[4747]: I1126 13:42:17.073916 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:42:17 crc kubenswrapper[4747]: I1126 
13:42:17.081816 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.941872 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:19 crc kubenswrapper[4747]: E1126 13:42:19.942865 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="extract-content" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.942883 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="extract-content" Nov 26 13:42:19 crc kubenswrapper[4747]: E1126 13:42:19.942908 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="extract-utilities" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.942916 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="extract-utilities" Nov 26 13:42:19 crc kubenswrapper[4747]: E1126 13:42:19.942934 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="registry-server" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.942943 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="registry-server" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.943133 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1662e982-2bce-4b21-a9ad-320473e84031" containerName="registry-server" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.944017 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.957308 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.968086 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.969926 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:19 crc kubenswrapper[4747]: I1126 13:42:19.993374 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.017518 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.017856 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018013 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018164 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqqw\" (UniqueName: \"kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018277 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018412 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018524 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018652 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018764 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018878 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.018972 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019146 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019306 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019440 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019569 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019705 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh9b\" (UniqueName: \"kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019820 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019916 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.019991 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020150 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020233 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020318 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020404 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020551 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020643 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020723 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020821 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.020945 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122256 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122314 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122335 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122417 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122433 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run\") pod \"glance-default-single-2\" (UID: 
\"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122479 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122501 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122532 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh9b\" (UniqueName: \"kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122557 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122581 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122938 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123047 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122531 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122556 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123185 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123224 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123266 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.122494 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123416 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123445 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123499 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123508 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme\") pod \"glance-default-single-2\" (UID: 
\"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123517 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123528 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123537 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123578 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123592 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123601 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123640 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123724 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123727 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123724 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded 
for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123915 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.123993 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124029 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124076 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124126 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqqw\" (UniqueName: \"kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124158 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124170 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124204 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-1\" (UID: 
\"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124380 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.124962 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.125338 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.127202 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.133466 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.152193 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.152354 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.153241 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh9b\" (UniqueName: \"kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.167198 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 
13:42:20.174796 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqqw\" (UniqueName: \"kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.182676 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.216248 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.236067 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-2\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.246155 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-1\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " pod="glance-kuttl-tests/glance-default-single-1"
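Both replicas now have every volume verified as attached, the local-volume PVs device-mounted under /mnt/openstack, and all MountVolume.SetUp operations reported successful, so the kubelet can move on to creating the pod sandboxes. For working with a capture like this, here is a minimal throwaway reader (assumptions: the klog message format shown above, and one journal entry per line the way journalctl emits them; this is a sketch, not a kubelet API):

```go
// Reconstruct the per-volume mount sequence from kubelet journal lines:
// VerifyControllerAttachedVolume -> MountVolume.MountDevice (device-backed
// volumes only, e.g. the local-volume PVs) -> MountVolume.SetUp.
// Reads the journal on stdin; assumes one journal entry per line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Volume names appear as \"name\" in this dump; the \\? makes the
	// escaping backslash optional so plain journalctl output also matches.
	opRe  = regexp.MustCompile(`(VerifyControllerAttachedVolume|MountVolume\.MountDevice|MountVolume\.SetUp) (started|succeeded) for volume \\?"([^"\\]+)\\?"`)
	podRe = regexp.MustCompile(`pod="([^"]+)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // kubelet journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		op := opRe.FindStringSubmatch(line)
		pod := podRe.FindStringSubmatch(line)
		if op == nil || pod == nil {
			continue
		}
		// e.g. glance-kuttl-tests/glance-default-single-2  MountVolume.SetUp  succeeded  volume=scripts
		fmt.Printf("%s  %s  %s  volume=%s\n", pod[1], op[1], op[2], op[3])
	}
}
```

Piping a `journalctl -u kubelet` capture through it yields one line per volume operation, which makes the interleaved two-pod sequence above considerably easier to follow.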
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.708663 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:20 crc kubenswrapper[4747]: I1126 13:42:20.771271 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:20 crc kubenswrapper[4747]: W1126 13:42:20.798346 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4679b84_9557_42f8_833d_f4322b1f321e.slice/crio-7489507f8800df8d12254a9c7067762741d85b376db280b44253010969ab450e WatchSource:0}: Error finding container 7489507f8800df8d12254a9c7067762741d85b376db280b44253010969ab450e: Status 404 returned error can't find the container with id 7489507f8800df8d12254a9c7067762741d85b376db280b44253010969ab450e Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.112653 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerStarted","Data":"ce63d4a4f24b4a553a3437340f07876e7ad07381bd6937dae56958b7669d33cb"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.113313 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerStarted","Data":"1b65802f94483e52f20a2436eeeb6cffd1b711c138611260bc92d31d2111881f"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.113330 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerStarted","Data":"f513711955bec32e89cab2c669b3551b94af1e055369ba42b654eba04de11777"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.115289 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerStarted","Data":"50a43d08549703bdd0ac8aba8626e15e46ce9b614642243a7ec615067d180f06"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.115328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerStarted","Data":"e1f869560d8b0c1e1098d0d26f727eae5662251633a3e5882106e2472d30b36e"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.115341 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerStarted","Data":"7489507f8800df8d12254a9c7067762741d85b376db280b44253010969ab450e"} Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.136628 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-2" podStartSLOduration=3.136602564 podStartE2EDuration="3.136602564s" podCreationTimestamp="2025-11-26 13:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:21.136251765 +0000 UTC m=+1628.122562780" watchObservedRunningTime="2025-11-26 13:42:21.136602564 +0000 UTC m=+1628.122913599" Nov 26 13:42:21 crc kubenswrapper[4747]: I1126 13:42:21.170683 4747 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=3.170655017 podStartE2EDuration="3.170655017s" podCreationTimestamp="2025-11-26 13:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:21.163179902 +0000 UTC m=+1628.149490927" watchObservedRunningTime="2025-11-26 13:42:21.170655017 +0000 UTC m=+1628.156966072" Nov 26 13:42:27 crc kubenswrapper[4747]: I1126 13:42:27.798433 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:42:27 crc kubenswrapper[4747]: E1126 13:42:27.800472 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.262829 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.263721 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.290042 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.290213 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.290413 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.318478 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.327470 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:30 crc kubenswrapper[4747]: I1126 13:42:30.331623 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:31 crc kubenswrapper[4747]: I1126 13:42:31.199207 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:31 crc kubenswrapper[4747]: I1126 13:42:31.199261 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:31 crc kubenswrapper[4747]: I1126 13:42:31.199475 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:31 crc kubenswrapper[4747]: I1126 13:42:31.199492 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.214343 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 
Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.214709 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.261800 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.262382 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.265089 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.365734 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:33 crc kubenswrapper[4747]: I1126 13:42:33.365793 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:34 crc kubenswrapper[4747]: I1126 13:42:34.497927 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:34 crc kubenswrapper[4747]: I1126 13:42:34.512486 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.228738 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-log" containerID="cri-o://1b65802f94483e52f20a2436eeeb6cffd1b711c138611260bc92d31d2111881f" gracePeriod=30 Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.228932 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-log" containerID="cri-o://e1f869560d8b0c1e1098d0d26f727eae5662251633a3e5882106e2472d30b36e" gracePeriod=30 Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.229329 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-httpd" containerID="cri-o://ce63d4a4f24b4a553a3437340f07876e7ad07381bd6937dae56958b7669d33cb" gracePeriod=30 Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.229562 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-httpd" containerID="cri-o://50a43d08549703bdd0ac8aba8626e15e46ce9b614642243a7ec615067d180f06" gracePeriod=30 Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.236765 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-2" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.143:9292/healthcheck\": EOF" Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.238613 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-2" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.143:9292/healthcheck\": EOF" Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.240740 4747 prober.go:107] "Probe failed" probeType="Readiness" 
pod="glance-kuttl-tests/glance-default-single-1" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.144:9292/healthcheck\": EOF" Nov 26 13:42:35 crc kubenswrapper[4747]: I1126 13:42:35.242327 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-1" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.144:9292/healthcheck\": EOF" Nov 26 13:42:36 crc kubenswrapper[4747]: I1126 13:42:36.238256 4747 generic.go:334] "Generic (PLEG): container finished" podID="11899e0a-c374-4c34-b078-a552b1798afe" containerID="1b65802f94483e52f20a2436eeeb6cffd1b711c138611260bc92d31d2111881f" exitCode=143 Nov 26 13:42:36 crc kubenswrapper[4747]: I1126 13:42:36.238326 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerDied","Data":"1b65802f94483e52f20a2436eeeb6cffd1b711c138611260bc92d31d2111881f"} Nov 26 13:42:36 crc kubenswrapper[4747]: I1126 13:42:36.240540 4747 generic.go:334] "Generic (PLEG): container finished" podID="b4679b84-9557-42f8-833d-f4322b1f321e" containerID="e1f869560d8b0c1e1098d0d26f727eae5662251633a3e5882106e2472d30b36e" exitCode=143 Nov 26 13:42:36 crc kubenswrapper[4747]: I1126 13:42:36.240569 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerDied","Data":"e1f869560d8b0c1e1098d0d26f727eae5662251633a3e5882106e2472d30b36e"} Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.268550 4747 generic.go:334] "Generic (PLEG): container finished" podID="11899e0a-c374-4c34-b078-a552b1798afe" containerID="ce63d4a4f24b4a553a3437340f07876e7ad07381bd6937dae56958b7669d33cb" exitCode=0 Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.269331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerDied","Data":"ce63d4a4f24b4a553a3437340f07876e7ad07381bd6937dae56958b7669d33cb"} Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.287619 4747 generic.go:334] "Generic (PLEG): container finished" podID="b4679b84-9557-42f8-833d-f4322b1f321e" containerID="50a43d08549703bdd0ac8aba8626e15e46ce9b614642243a7ec615067d180f06" exitCode=0 Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.288245 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerDied","Data":"50a43d08549703bdd0ac8aba8626e15e46ce9b614642243a7ec615067d180f06"} Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.838084 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.842779 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.958982 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959050 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959085 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959125 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959151 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959138 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run" (OuterVolumeSpecName: "run") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959172 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959172 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959182 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959241 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959201 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959288 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vqqw\" (UniqueName: \"kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959317 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev" (OuterVolumeSpecName: "dev") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959345 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959372 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kh9b\" (UniqueName: \"kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959395 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959328 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959416 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959443 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959462 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959469 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev" (OuterVolumeSpecName: "dev") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959483 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959485 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys" (OuterVolumeSpecName: "sys") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959502 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959528 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959569 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959601 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959622 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959638 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959667 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959694 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959712 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959732 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959760 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick\") pod \"11899e0a-c374-4c34-b078-a552b1798afe\" (UID: \"11899e0a-c374-4c34-b078-a552b1798afe\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959796 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run\") pod \"b4679b84-9557-42f8-833d-f4322b1f321e\" (UID: \"b4679b84-9557-42f8-833d-f4322b1f321e\") " Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960130 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960141 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960150 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960158 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960165 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960175 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960185 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960197 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959759 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs" (OuterVolumeSpecName: "logs") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959779 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run" (OuterVolumeSpecName: "run") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959793 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.959985 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960005 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960196 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960249 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960384 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys" (OuterVolumeSpecName: "sys") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960386 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.960413 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs" (OuterVolumeSpecName: "logs") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966313 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts" (OuterVolumeSpecName: "scripts") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966346 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966415 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b" (OuterVolumeSpecName: "kube-api-access-5kh9b") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "kube-api-access-5kh9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966422 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966475 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance-cache") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.966823 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.977944 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts" (OuterVolumeSpecName: "scripts") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:39 crc kubenswrapper[4747]: I1126 13:42:39.978415 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw" (OuterVolumeSpecName: "kube-api-access-2vqqw") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "kube-api-access-2vqqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:39.999976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data" (OuterVolumeSpecName: "config-data") pod "11899e0a-c374-4c34-b078-a552b1798afe" (UID: "11899e0a-c374-4c34-b078-a552b1798afe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.012374 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data" (OuterVolumeSpecName: "config-data") pod "b4679b84-9557-42f8-833d-f4322b1f321e" (UID: "b4679b84-9557-42f8-833d-f4322b1f321e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062744 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062791 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vqqw\" (UniqueName: \"kubernetes.io/projected/b4679b84-9557-42f8-833d-f4322b1f321e-kube-api-access-2vqqw\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062803 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062838 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062848 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kh9b\" (UniqueName: \"kubernetes.io/projected/11899e0a-c374-4c34-b078-a552b1798afe-kube-api-access-5kh9b\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062856 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062868 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062879 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062888 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062896 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4679b84-9557-42f8-833d-f4322b1f321e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062907 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11899e0a-c374-4c34-b078-a552b1798afe-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062919 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062926 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062934 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062942 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062950 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062957 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4679b84-9557-42f8-833d-f4322b1f321e-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062966 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11899e0a-c374-4c34-b078-a552b1798afe-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062979 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/11899e0a-c374-4c34-b078-a552b1798afe-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.062989 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4679b84-9557-42f8-833d-f4322b1f321e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.077876 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" 
(UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.080084 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.086265 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.086500 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.164590 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.164635 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.164651 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.164666 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.265836 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:40 crc kubenswrapper[4747]: E1126 13:42:40.266439 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.266558 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: E1126 13:42:40.266633 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.266700 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: E1126 13:42:40.266799 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.266881 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: E1126 13:42:40.266996 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.267107 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.267350 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.267435 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.267518 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="11899e0a-c374-4c34-b078-a552b1798afe" containerName="glance-httpd" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.267617 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" containerName="glance-log" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.268910 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.287790 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.299514 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.299528 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"b4679b84-9557-42f8-833d-f4322b1f321e","Type":"ContainerDied","Data":"7489507f8800df8d12254a9c7067762741d85b376db280b44253010969ab450e"} Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.300166 4747 scope.go:117] "RemoveContainer" containerID="50a43d08549703bdd0ac8aba8626e15e46ce9b614642243a7ec615067d180f06" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.316245 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"11899e0a-c374-4c34-b078-a552b1798afe","Type":"ContainerDied","Data":"f513711955bec32e89cab2c669b3551b94af1e055369ba42b654eba04de11777"} Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.316367 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.342209 4747 scope.go:117] "RemoveContainer" containerID="e1f869560d8b0c1e1098d0d26f727eae5662251633a3e5882106e2472d30b36e" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.355693 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.368623 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.371284 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.371358 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtx9\" (UniqueName: \"kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.371383 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.382493 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.388118 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.390700 4747 scope.go:117] "RemoveContainer" containerID="ce63d4a4f24b4a553a3437340f07876e7ad07381bd6937dae56958b7669d33cb" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.414992 4747 scope.go:117] "RemoveContainer" containerID="1b65802f94483e52f20a2436eeeb6cffd1b711c138611260bc92d31d2111881f" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.473303 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.473384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtx9\" (UniqueName: \"kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.473591 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.474112 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.474184 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.490215 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtx9\" (UniqueName: \"kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9\") pod \"certified-operators-rjfp4\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.593886 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.866113 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.873826 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.874135 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-log" containerID="cri-o://30aa9effdd4dc750a1ba35376d9cac13db866c8d6cc87d8482be5d0932cf2d29" gracePeriod=30 Nov 26 13:42:40 crc kubenswrapper[4747]: I1126 13:42:40.874777 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-httpd" containerID="cri-o://96e48c7176bdb25a9048d4d6753f8603b74f6a8bb5c99000150ed2b968451cad" gracePeriod=30 Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.326400 4747 generic.go:334] "Generic (PLEG): container finished" podID="003133a2-9fe4-4567-ac85-b99fbee30003" containerID="30aa9effdd4dc750a1ba35376d9cac13db866c8d6cc87d8482be5d0932cf2d29" exitCode=143 Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.326452 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerDied","Data":"30aa9effdd4dc750a1ba35376d9cac13db866c8d6cc87d8482be5d0932cf2d29"} Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.330896 4747 generic.go:334] "Generic (PLEG): container finished" podID="d96657a1-b26e-4040-a586-1378df6f54a0" containerID="03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690" exitCode=0 Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.330922 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerDied","Data":"03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690"} Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.330939 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerStarted","Data":"bf9fe7eba816754ba1eda4cace08d89c0f54a59934c013a52c2a9b0664719045"} Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.799161 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:42:41 crc kubenswrapper[4747]: E1126 13:42:41.799444 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.807667 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11899e0a-c374-4c34-b078-a552b1798afe" path="/var/lib/kubelet/pods/11899e0a-c374-4c34-b078-a552b1798afe/volumes" Nov 26 13:42:41 crc kubenswrapper[4747]: I1126 13:42:41.808476 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4679b84-9557-42f8-833d-f4322b1f321e" path="/var/lib/kubelet/pods/b4679b84-9557-42f8-833d-f4322b1f321e/volumes" Nov 26 13:42:43 crc kubenswrapper[4747]: I1126 13:42:43.358996 4747 generic.go:334] "Generic (PLEG): container finished" podID="d96657a1-b26e-4040-a586-1378df6f54a0" containerID="be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af" exitCode=0 Nov 26 13:42:43 crc kubenswrapper[4747]: I1126 13:42:43.359110 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerDied","Data":"be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af"} Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.369400 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerStarted","Data":"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054"} Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.373025 4747 generic.go:334] "Generic (PLEG): container finished" podID="003133a2-9fe4-4567-ac85-b99fbee30003" containerID="96e48c7176bdb25a9048d4d6753f8603b74f6a8bb5c99000150ed2b968451cad" exitCode=0 Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.373079 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerDied","Data":"96e48c7176bdb25a9048d4d6753f8603b74f6a8bb5c99000150ed2b968451cad"} Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.412251 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rjfp4" podStartSLOduration=1.8724394439999998 podStartE2EDuration="4.412227902s" podCreationTimestamp="2025-11-26 13:42:40 +0000 UTC" 
firstStartedPulling="2025-11-26 13:42:41.332503736 +0000 UTC m=+1648.318814751" lastFinishedPulling="2025-11-26 13:42:43.872292194 +0000 UTC m=+1650.858603209" observedRunningTime="2025-11-26 13:42:44.403845184 +0000 UTC m=+1651.390156209" watchObservedRunningTime="2025-11-26 13:42:44.412227902 +0000 UTC m=+1651.398538927" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.473839 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.548535 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.548935 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.548974 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549001 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549043 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549082 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549116 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549149 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549170 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 
26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549203 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549227 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549294 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw96m\" (UniqueName: \"kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549346 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549372 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules\") pod \"003133a2-9fe4-4567-ac85-b99fbee30003\" (UID: \"003133a2-9fe4-4567-ac85-b99fbee30003\") " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs" (OuterVolumeSpecName: "logs") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549684 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549743 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549785 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys" (OuterVolumeSpecName: "sys") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549798 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549808 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev" (OuterVolumeSpecName: "dev") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549825 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.549866 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.550118 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run" (OuterVolumeSpecName: "run") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.551182 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.554424 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts" (OuterVolumeSpecName: "scripts") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.554431 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance-cache") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.554838 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m" (OuterVolumeSpecName: "kube-api-access-sw96m") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "kube-api-access-sw96m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.555533 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.591788 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data" (OuterVolumeSpecName: "config-data") pod "003133a2-9fe4-4567-ac85-b99fbee30003" (UID: "003133a2-9fe4-4567-ac85-b99fbee30003"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.650939 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.650979 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.650992 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651031 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651075 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003133a2-9fe4-4567-ac85-b99fbee30003-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651089 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/003133a2-9fe4-4567-ac85-b99fbee30003-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651102 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651112 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651122 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651135 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw96m\" (UniqueName: \"kubernetes.io/projected/003133a2-9fe4-4567-ac85-b99fbee30003-kube-api-access-sw96m\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651147 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651157 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/003133a2-9fe4-4567-ac85-b99fbee30003-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.651177 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.664246 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.665393 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.752223 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:44 crc kubenswrapper[4747]: I1126 13:42:44.752252 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.383264 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.383335 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"003133a2-9fe4-4567-ac85-b99fbee30003","Type":"ContainerDied","Data":"594de66c98c3b338671e14f4891810d5241bf83836dfa28f7de446fdbbe1d328"} Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.383371 4747 scope.go:117] "RemoveContainer" containerID="96e48c7176bdb25a9048d4d6753f8603b74f6a8bb5c99000150ed2b968451cad" Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.407695 4747 scope.go:117] "RemoveContainer" containerID="30aa9effdd4dc750a1ba35376d9cac13db866c8d6cc87d8482be5d0932cf2d29" Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.418493 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.439588 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:42:45 crc kubenswrapper[4747]: I1126 13:42:45.806440 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" path="/var/lib/kubelet/pods/003133a2-9fe4-4567-ac85-b99fbee30003/volumes" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.295121 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-ql9bd"] Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.301698 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-ql9bd"] Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.321286 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance600c-account-delete-npvhq"] Nov 26 13:42:46 crc kubenswrapper[4747]: E1126 13:42:46.321566 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-httpd" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.321577 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-httpd" Nov 26 13:42:46 crc kubenswrapper[4747]: E1126 13:42:46.321595 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-log" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.321601 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-log" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.321736 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-httpd" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.321750 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="003133a2-9fe4-4567-ac85-b99fbee30003" containerName="glance-log" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.322200 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.332565 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance600c-account-delete-npvhq"] Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.475167 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.475275 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jcc\" (UniqueName: \"kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.576664 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.576778 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4jcc\" (UniqueName: \"kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.577739 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.603531 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4jcc\" (UniqueName: \"kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc\") pod \"glance600c-account-delete-npvhq\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:46 crc kubenswrapper[4747]: I1126 13:42:46.636944 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:47 crc kubenswrapper[4747]: I1126 13:42:47.111630 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance600c-account-delete-npvhq"] Nov 26 13:42:47 crc kubenswrapper[4747]: W1126 13:42:47.117213 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode707f43b_b21d_4720_a71a_dd80794a4be2.slice/crio-eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb WatchSource:0}: Error finding container eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb: Status 404 returned error can't find the container with id eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb Nov 26 13:42:47 crc kubenswrapper[4747]: I1126 13:42:47.400210 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" event={"ID":"e707f43b-b21d-4720-a71a-dd80794a4be2","Type":"ContainerStarted","Data":"0db4076a5fbc6ad1d70edb83ac0f8214c0cea71d954c0e02a738fd1d47ed3ee8"} Nov 26 13:42:47 crc kubenswrapper[4747]: I1126 13:42:47.400541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" event={"ID":"e707f43b-b21d-4720-a71a-dd80794a4be2","Type":"ContainerStarted","Data":"eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb"} Nov 26 13:42:47 crc kubenswrapper[4747]: I1126 13:42:47.418716 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" podStartSLOduration=1.418692343 podStartE2EDuration="1.418692343s" podCreationTimestamp="2025-11-26 13:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:47.412528791 +0000 UTC m=+1654.398839806" watchObservedRunningTime="2025-11-26 13:42:47.418692343 +0000 UTC m=+1654.405003368" Nov 26 13:42:47 crc kubenswrapper[4747]: I1126 13:42:47.827440 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52d49d1-bd23-4029-b3cf-2375a6785d36" path="/var/lib/kubelet/pods/d52d49d1-bd23-4029-b3cf-2375a6785d36/volumes" Nov 26 13:42:48 crc kubenswrapper[4747]: I1126 13:42:48.408081 4747 generic.go:334] "Generic (PLEG): container finished" podID="e707f43b-b21d-4720-a71a-dd80794a4be2" containerID="0db4076a5fbc6ad1d70edb83ac0f8214c0cea71d954c0e02a738fd1d47ed3ee8" exitCode=0 Nov 26 13:42:48 crc kubenswrapper[4747]: I1126 13:42:48.408154 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" event={"ID":"e707f43b-b21d-4720-a71a-dd80794a4be2","Type":"ContainerDied","Data":"0db4076a5fbc6ad1d70edb83ac0f8214c0cea71d954c0e02a738fd1d47ed3ee8"} Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.707167 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.821417 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4jcc\" (UniqueName: \"kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc\") pod \"e707f43b-b21d-4720-a71a-dd80794a4be2\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.821651 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts\") pod \"e707f43b-b21d-4720-a71a-dd80794a4be2\" (UID: \"e707f43b-b21d-4720-a71a-dd80794a4be2\") " Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.822331 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e707f43b-b21d-4720-a71a-dd80794a4be2" (UID: "e707f43b-b21d-4720-a71a-dd80794a4be2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.830586 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc" (OuterVolumeSpecName: "kube-api-access-j4jcc") pod "e707f43b-b21d-4720-a71a-dd80794a4be2" (UID: "e707f43b-b21d-4720-a71a-dd80794a4be2"). InnerVolumeSpecName "kube-api-access-j4jcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.923530 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e707f43b-b21d-4720-a71a-dd80794a4be2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:49 crc kubenswrapper[4747]: I1126 13:42:49.923568 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4jcc\" (UniqueName: \"kubernetes.io/projected/e707f43b-b21d-4720-a71a-dd80794a4be2-kube-api-access-j4jcc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.427946 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" event={"ID":"e707f43b-b21d-4720-a71a-dd80794a4be2","Type":"ContainerDied","Data":"eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb"} Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.428000 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb4bd440905cce41dfa4a73652f2a666f73838b1ab526b5fcd7b6c451daa6ccb" Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.428005 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance600c-account-delete-npvhq" Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.594500 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.594546 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:50 crc kubenswrapper[4747]: I1126 13:42:50.678856 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.206268 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:42:51 crc kubenswrapper[4747]: E1126 13:42:51.206628 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e707f43b-b21d-4720-a71a-dd80794a4be2" containerName="mariadb-account-delete" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.206649 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e707f43b-b21d-4720-a71a-dd80794a4be2" containerName="mariadb-account-delete" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.206886 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e707f43b-b21d-4720-a71a-dd80794a4be2" containerName="mariadb-account-delete" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.207627 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.210306 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.210852 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.210983 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.211736 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-zkhw9" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.222934 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.241941 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24ks\" (UniqueName: \"kubernetes.io/projected/b12599e7-b7ee-490d-b697-e1c48161d7a6-kube-api-access-n24ks\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.242098 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.242262 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-scripts\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.242457 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.344142 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.344265 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.344307 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24ks\" (UniqueName: \"kubernetes.io/projected/b12599e7-b7ee-490d-b697-e1c48161d7a6-kube-api-access-n24ks\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.344415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-scripts\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.345261 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.347352 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-scripts\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.352289 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b12599e7-b7ee-490d-b697-e1c48161d7a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.382378 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-8jt54"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.392860 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n24ks\" (UniqueName: \"kubernetes.io/projected/b12599e7-b7ee-490d-b697-e1c48161d7a6-kube-api-access-n24ks\") pod \"openstackclient\" (UID: \"b12599e7-b7ee-490d-b697-e1c48161d7a6\") " pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.397337 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-8jt54"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.407705 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance600c-account-delete-npvhq"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.415980 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-600c-account-create-update-vq9k8"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.424180 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance600c-account-delete-npvhq"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.432888 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-600c-account-create-update-vq9k8"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.442752 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-5dxfx"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.444125 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.449649 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-5dxfx"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.486663 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.531700 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.536333 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.546824 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.546969 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfbr2\" (UniqueName: \"kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.550764 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-e5bb-account-create-update-bxznw"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.551711 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.554630 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.564364 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-e5bb-account-create-update-bxznw"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.649098 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.649165 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.649212 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfbr2\" (UniqueName: \"kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.649240 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72244\" (UniqueName: \"kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.650559 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.670114 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfbr2\" (UniqueName: \"kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2\") pod \"glance-db-create-5dxfx\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.750641 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.750734 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72244\" (UniqueName: 
\"kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.752684 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.764239 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.770013 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:51 crc kubenswrapper[4747]: W1126 13:42:51.770484 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb12599e7_b7ee_490d_b697_e1c48161d7a6.slice/crio-a57d9b08dfa5b014813deef7344d13813f7b7a157063145fb5c720ec2fabc485 WatchSource:0}: Error finding container a57d9b08dfa5b014813deef7344d13813f7b7a157063145fb5c720ec2fabc485: Status 404 returned error can't find the container with id a57d9b08dfa5b014813deef7344d13813f7b7a157063145fb5c720ec2fabc485 Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.771236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72244\" (UniqueName: \"kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244\") pod \"glance-e5bb-account-create-update-bxznw\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.809597 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a1e2aa-8e90-4d50-aa8a-905474945f53" path="/var/lib/kubelet/pods/39a1e2aa-8e90-4d50-aa8a-905474945f53/volumes" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.810247 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e707f43b-b21d-4720-a71a-dd80794a4be2" path="/var/lib/kubelet/pods/e707f43b-b21d-4720-a71a-dd80794a4be2/volumes" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.810756 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb75f03c-de62-477e-bcc4-f4eb2ea7b03d" path="/var/lib/kubelet/pods/fb75f03c-de62-477e-bcc4-f4eb2ea7b03d/volumes" Nov 26 13:42:51 crc kubenswrapper[4747]: I1126 13:42:51.919884 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.173674 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-5dxfx"] Nov 26 13:42:52 crc kubenswrapper[4747]: W1126 13:42:52.174754 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7da5be4_98ba_46b8_a15b_db43a4b12968.slice/crio-85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66 WatchSource:0}: Error finding container 85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66: Status 404 returned error can't find the container with id 85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66 Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.312268 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-e5bb-account-create-update-bxznw"] Nov 26 13:42:52 crc kubenswrapper[4747]: W1126 13:42:52.315565 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5c06574_7633_4b77_aea8_c99b389ef6e0.slice/crio-b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9 WatchSource:0}: Error finding container b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9: Status 404 returned error can't find the container with id b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9 Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.456886 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5dxfx" event={"ID":"b7da5be4-98ba-46b8-a15b-db43a4b12968","Type":"ContainerStarted","Data":"97531b2c47bfe5331439d40caccf2c75bc1561929289dc519e0c4161241b0bf0"} Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.457506 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5dxfx" event={"ID":"b7da5be4-98ba-46b8-a15b-db43a4b12968","Type":"ContainerStarted","Data":"85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66"} Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.459936 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"b12599e7-b7ee-490d-b697-e1c48161d7a6","Type":"ContainerStarted","Data":"119c1733a896a6f5c259d22600979043e9b26868c3396d54659961310138e88c"} Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.459980 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"b12599e7-b7ee-490d-b697-e1c48161d7a6","Type":"ContainerStarted","Data":"a57d9b08dfa5b014813deef7344d13813f7b7a157063145fb5c720ec2fabc485"} Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.461921 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" event={"ID":"e5c06574-7633-4b77-aea8-c99b389ef6e0","Type":"ContainerStarted","Data":"b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9"} Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.484696 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-5dxfx" podStartSLOduration=1.484676224 podStartE2EDuration="1.484676224s" podCreationTimestamp="2025-11-26 13:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-26 13:42:52.47967251 +0000 UTC m=+1659.465983515" watchObservedRunningTime="2025-11-26 13:42:52.484676224 +0000 UTC m=+1659.470987239" Nov 26 13:42:52 crc kubenswrapper[4747]: I1126 13:42:52.498416 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.498395223 podStartE2EDuration="1.498395223s" podCreationTimestamp="2025-11-26 13:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:52.496001684 +0000 UTC m=+1659.482312699" watchObservedRunningTime="2025-11-26 13:42:52.498395223 +0000 UTC m=+1659.484706238" Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.475280 4747 generic.go:334] "Generic (PLEG): container finished" podID="b7da5be4-98ba-46b8-a15b-db43a4b12968" containerID="97531b2c47bfe5331439d40caccf2c75bc1561929289dc519e0c4161241b0bf0" exitCode=0 Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.475410 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5dxfx" event={"ID":"b7da5be4-98ba-46b8-a15b-db43a4b12968","Type":"ContainerDied","Data":"97531b2c47bfe5331439d40caccf2c75bc1561929289dc519e0c4161241b0bf0"} Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.478401 4747 generic.go:334] "Generic (PLEG): container finished" podID="e5c06574-7633-4b77-aea8-c99b389ef6e0" containerID="fb98bf1996c7b469c4f03090e93c44b16e58d8fc5e8d77342fe90963fc72383c" exitCode=0 Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.478491 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" event={"ID":"e5c06574-7633-4b77-aea8-c99b389ef6e0","Type":"ContainerDied","Data":"fb98bf1996c7b469c4f03090e93c44b16e58d8fc5e8d77342fe90963fc72383c"} Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.478967 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rjfp4" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="registry-server" containerID="cri-o://e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054" gracePeriod=2 Nov 26 13:42:53 crc kubenswrapper[4747]: I1126 13:42:53.806492 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:42:53 crc kubenswrapper[4747]: E1126 13:42:53.807219 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.846356 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.898737 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.905030 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72244\" (UniqueName: \"kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244\") pod \"e5c06574-7633-4b77-aea8-c99b389ef6e0\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.905090 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts\") pod \"e5c06574-7633-4b77-aea8-c99b389ef6e0\" (UID: \"e5c06574-7633-4b77-aea8-c99b389ef6e0\") " Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.906241 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c06574-7633-4b77-aea8-c99b389ef6e0" (UID: "e5c06574-7633-4b77-aea8-c99b389ef6e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:54 crc kubenswrapper[4747]: I1126 13:42:54.913725 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244" (OuterVolumeSpecName: "kube-api-access-72244") pod "e5c06574-7633-4b77-aea8-c99b389ef6e0" (UID: "e5c06574-7633-4b77-aea8-c99b389ef6e0"). InnerVolumeSpecName "kube-api-access-72244". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.006822 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts\") pod \"b7da5be4-98ba-46b8-a15b-db43a4b12968\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.007037 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfbr2\" (UniqueName: \"kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2\") pod \"b7da5be4-98ba-46b8-a15b-db43a4b12968\" (UID: \"b7da5be4-98ba-46b8-a15b-db43a4b12968\") " Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.007324 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72244\" (UniqueName: \"kubernetes.io/projected/e5c06574-7633-4b77-aea8-c99b389ef6e0-kube-api-access-72244\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.007341 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c06574-7633-4b77-aea8-c99b389ef6e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.007531 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7da5be4-98ba-46b8-a15b-db43a4b12968" (UID: "b7da5be4-98ba-46b8-a15b-db43a4b12968"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.012446 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2" (OuterVolumeSpecName: "kube-api-access-kfbr2") pod "b7da5be4-98ba-46b8-a15b-db43a4b12968" (UID: "b7da5be4-98ba-46b8-a15b-db43a4b12968"). InnerVolumeSpecName "kube-api-access-kfbr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.108711 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfbr2\" (UniqueName: \"kubernetes.io/projected/b7da5be4-98ba-46b8-a15b-db43a4b12968-kube-api-access-kfbr2\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.108744 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7da5be4-98ba-46b8-a15b-db43a4b12968-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.368616 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.413484 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content\") pod \"d96657a1-b26e-4040-a586-1378df6f54a0\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.413572 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwtx9\" (UniqueName: \"kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9\") pod \"d96657a1-b26e-4040-a586-1378df6f54a0\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.413614 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities\") pod \"d96657a1-b26e-4040-a586-1378df6f54a0\" (UID: \"d96657a1-b26e-4040-a586-1378df6f54a0\") " Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.414807 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities" (OuterVolumeSpecName: "utilities") pod "d96657a1-b26e-4040-a586-1378df6f54a0" (UID: "d96657a1-b26e-4040-a586-1378df6f54a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.417353 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9" (OuterVolumeSpecName: "kube-api-access-fwtx9") pod "d96657a1-b26e-4040-a586-1378df6f54a0" (UID: "d96657a1-b26e-4040-a586-1378df6f54a0"). InnerVolumeSpecName "kube-api-access-fwtx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.454976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d96657a1-b26e-4040-a586-1378df6f54a0" (UID: "d96657a1-b26e-4040-a586-1378df6f54a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.495238 4747 generic.go:334] "Generic (PLEG): container finished" podID="d96657a1-b26e-4040-a586-1378df6f54a0" containerID="e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054" exitCode=0 Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.495301 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerDied","Data":"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054"} Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.495331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjfp4" event={"ID":"d96657a1-b26e-4040-a586-1378df6f54a0","Type":"ContainerDied","Data":"bf9fe7eba816754ba1eda4cace08d89c0f54a59934c013a52c2a9b0664719045"} Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.495351 4747 scope.go:117] "RemoveContainer" containerID="e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.495557 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjfp4" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.496915 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5dxfx" event={"ID":"b7da5be4-98ba-46b8-a15b-db43a4b12968","Type":"ContainerDied","Data":"85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66"} Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.496970 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e9e2b64a9b9c16111e016ffb07608859f614400294e15861896d48dede5d66" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.496986 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5dxfx" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.505921 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" event={"ID":"e5c06574-7633-4b77-aea8-c99b389ef6e0","Type":"ContainerDied","Data":"b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9"} Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.505968 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b017d4c71950264487d6379beb734df0298f166da9a7bace94020b1748bd2ed9" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.505968 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-e5bb-account-create-update-bxznw" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.515458 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwtx9\" (UniqueName: \"kubernetes.io/projected/d96657a1-b26e-4040-a586-1378df6f54a0-kube-api-access-fwtx9\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.515490 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.515502 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96657a1-b26e-4040-a586-1378df6f54a0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.515805 4747 scope.go:117] "RemoveContainer" containerID="be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.530124 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.535083 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rjfp4"] Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.547207 4747 scope.go:117] "RemoveContainer" containerID="03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.562524 4747 scope.go:117] "RemoveContainer" containerID="e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054" Nov 26 13:42:55 crc kubenswrapper[4747]: E1126 13:42:55.562899 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054\": container with ID starting with e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054 not found: ID does not exist" containerID="e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.562926 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054"} err="failed to get container status \"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054\": rpc error: code = NotFound desc = could not find container \"e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054\": container with ID starting with e0192c0da3cb4384240ffb26627491a96f760b15cbc30d0d31004480fb164054 not found: ID does not exist" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.562944 4747 scope.go:117] "RemoveContainer" containerID="be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af" Nov 26 13:42:55 crc kubenswrapper[4747]: E1126 13:42:55.563265 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af\": container with ID starting with be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af not found: ID does not exist" containerID="be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.563302 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af"} err="failed to get container status \"be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af\": rpc error: code = NotFound desc = could not find container \"be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af\": container with ID starting with be8d42301a7c46e0d908bdbfd1723432ecf28b83372b594e749ca2fb6ac9d6af not found: ID does not exist" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.563328 4747 scope.go:117] "RemoveContainer" containerID="03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690" Nov 26 13:42:55 crc kubenswrapper[4747]: E1126 13:42:55.563563 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690\": container with ID starting with 03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690 not found: ID does not exist" containerID="03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.563587 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690"} err="failed to get container status \"03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690\": rpc error: code = NotFound desc = could not find container \"03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690\": container with ID starting with 03c339490e5f75d4f8364032e59b7343b2a8e0fd1720379ed3c852bc04047690 not found: ID does not exist" Nov 26 13:42:55 crc kubenswrapper[4747]: I1126 13:42:55.812996 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" path="/var/lib/kubelet/pods/d96657a1-b26e-4040-a586-1378df6f54a0/volumes" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733612 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-79x4z"] Nov 26 13:42:56 crc kubenswrapper[4747]: E1126 13:42:56.733874 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c06574-7633-4b77-aea8-c99b389ef6e0" containerName="mariadb-account-create-update" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733885 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c06574-7633-4b77-aea8-c99b389ef6e0" containerName="mariadb-account-create-update" Nov 26 13:42:56 crc kubenswrapper[4747]: E1126 13:42:56.733903 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="extract-utilities" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733910 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="extract-utilities" Nov 26 13:42:56 crc kubenswrapper[4747]: E1126 13:42:56.733919 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="registry-server" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733925 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="registry-server" Nov 26 13:42:56 crc kubenswrapper[4747]: E1126 13:42:56.733938 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b7da5be4-98ba-46b8-a15b-db43a4b12968" containerName="mariadb-database-create" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733944 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7da5be4-98ba-46b8-a15b-db43a4b12968" containerName="mariadb-database-create" Nov 26 13:42:56 crc kubenswrapper[4747]: E1126 13:42:56.733956 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="extract-content" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.733961 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="extract-content" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.734097 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96657a1-b26e-4040-a586-1378df6f54a0" containerName="registry-server" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.734111 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7da5be4-98ba-46b8-a15b-db43a4b12968" containerName="mariadb-database-create" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.734123 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c06574-7633-4b77-aea8-c99b389ef6e0" containerName="mariadb-account-create-update" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.734555 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.736220 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pwlc6" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.736517 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.744243 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-79x4z"] Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.834938 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78n2q\" (UniqueName: \"kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.834979 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.835003 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.936164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78n2q\" (UniqueName: \"kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q\") pod \"glance-db-sync-79x4z\" (UID: 
\"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.936207 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.936232 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.942602 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.942935 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:56 crc kubenswrapper[4747]: I1126 13:42:56.954499 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78n2q\" (UniqueName: \"kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q\") pod \"glance-db-sync-79x4z\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:57 crc kubenswrapper[4747]: I1126 13:42:57.049627 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:42:57 crc kubenswrapper[4747]: I1126 13:42:57.441580 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-79x4z"] Nov 26 13:42:57 crc kubenswrapper[4747]: W1126 13:42:57.446643 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98ec4ca_6da2_4622_a71f_64476e8ff68c.slice/crio-9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222 WatchSource:0}: Error finding container 9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222: Status 404 returned error can't find the container with id 9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222 Nov 26 13:42:57 crc kubenswrapper[4747]: I1126 13:42:57.531246 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-79x4z" event={"ID":"d98ec4ca-6da2-4622-a71f-64476e8ff68c","Type":"ContainerStarted","Data":"9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222"} Nov 26 13:42:58 crc kubenswrapper[4747]: I1126 13:42:58.540541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-79x4z" event={"ID":"d98ec4ca-6da2-4622-a71f-64476e8ff68c","Type":"ContainerStarted","Data":"048e6e54c92371f216d0b3db5791641386d344c1e2b6601a16f126e8e5f1a108"} Nov 26 13:42:58 crc kubenswrapper[4747]: I1126 13:42:58.555710 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-79x4z" podStartSLOduration=2.555692575 podStartE2EDuration="2.555692575s" podCreationTimestamp="2025-11-26 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:58.552043195 +0000 UTC m=+1665.538354200" watchObservedRunningTime="2025-11-26 13:42:58.555692575 +0000 UTC m=+1665.542003590" Nov 26 13:43:01 crc kubenswrapper[4747]: I1126 13:43:01.587537 4747 generic.go:334] "Generic (PLEG): container finished" podID="d98ec4ca-6da2-4622-a71f-64476e8ff68c" containerID="048e6e54c92371f216d0b3db5791641386d344c1e2b6601a16f126e8e5f1a108" exitCode=0 Nov 26 13:43:01 crc kubenswrapper[4747]: I1126 13:43:01.587649 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-79x4z" event={"ID":"d98ec4ca-6da2-4622-a71f-64476e8ff68c","Type":"ContainerDied","Data":"048e6e54c92371f216d0b3db5791641386d344c1e2b6601a16f126e8e5f1a108"} Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.901122 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.963024 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data\") pod \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.963090 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data\") pod \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.963189 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78n2q\" (UniqueName: \"kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q\") pod \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\" (UID: \"d98ec4ca-6da2-4622-a71f-64476e8ff68c\") " Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.971584 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d98ec4ca-6da2-4622-a71f-64476e8ff68c" (UID: "d98ec4ca-6da2-4622-a71f-64476e8ff68c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:02 crc kubenswrapper[4747]: I1126 13:43:02.972550 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q" (OuterVolumeSpecName: "kube-api-access-78n2q") pod "d98ec4ca-6da2-4622-a71f-64476e8ff68c" (UID: "d98ec4ca-6da2-4622-a71f-64476e8ff68c"). InnerVolumeSpecName "kube-api-access-78n2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.017982 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data" (OuterVolumeSpecName: "config-data") pod "d98ec4ca-6da2-4622-a71f-64476e8ff68c" (UID: "d98ec4ca-6da2-4622-a71f-64476e8ff68c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.065126 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.065159 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d98ec4ca-6da2-4622-a71f-64476e8ff68c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.065170 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78n2q\" (UniqueName: \"kubernetes.io/projected/d98ec4ca-6da2-4622-a71f-64476e8ff68c-kube-api-access-78n2q\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.606300 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-79x4z" event={"ID":"d98ec4ca-6da2-4622-a71f-64476e8ff68c","Type":"ContainerDied","Data":"9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222"} Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.606604 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9784c0e62a3d457f2e8c28f6b4948289767d9f996b8d161721a48ffd6955b222" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.606403 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-79x4z" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.962952 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:03 crc kubenswrapper[4747]: E1126 13:43:03.963271 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98ec4ca-6da2-4622-a71f-64476e8ff68c" containerName="glance-db-sync" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.963283 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98ec4ca-6da2-4622-a71f-64476e8ff68c" containerName="glance-db-sync" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.963406 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98ec4ca-6da2-4622-a71f-64476e8ff68c" containerName="glance-db-sync" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.964169 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.966153 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.966526 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pwlc6" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.966712 4747 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.980010 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.987089 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:03 crc kubenswrapper[4747]: I1126 13:43:03.988391 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.004190 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081471 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081520 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081544 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081703 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081752 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081813 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bswjr\" (UniqueName: \"kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081845 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081873 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081899 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081930 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.081990 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082048 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082098 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082139 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082167 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082240 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082314 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082345 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082407 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082457 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082483 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082504 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082522 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082573 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082613 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.082629 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9nx\" (UniqueName: \"kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.183904 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.183946 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.183968 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.183983 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184003 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp9nx\" (UniqueName: \"kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184018 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184043 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184105 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184121 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184136 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184156 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bswjr\" (UniqueName: \"kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184174 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184192 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184208 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184226 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184249 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " 
pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184265 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184284 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184300 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184318 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184336 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184353 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184377 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184392 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184411 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184427 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.184831 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185085 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185185 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185241 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185307 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185336 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185361 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 
13:43:04.185344 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185457 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185535 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185573 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185581 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185602 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185620 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185629 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185682 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185691 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185222 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.186007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.185951 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.191209 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.191547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.191764 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.191831 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.209625 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp9nx\" (UniqueName: \"kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " 
pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.211856 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bswjr\" (UniqueName: \"kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.219921 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.222837 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.224893 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.248230 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.283865 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.304606 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.803224 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:04 crc kubenswrapper[4747]: W1126 13:43:04.807528 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81885d61_1773_412a_823f_df25718c692a.slice/crio-5085b7965cab44e8d6db6dd22f68a6e1ad4aaa0ce6c8433fc1d64e8d278f713c WatchSource:0}: Error finding container 5085b7965cab44e8d6db6dd22f68a6e1ad4aaa0ce6c8433fc1d64e8d278f713c: Status 404 returned error can't find the container with id 5085b7965cab44e8d6db6dd22f68a6e1ad4aaa0ce6c8433fc1d64e8d278f713c Nov 26 13:43:04 crc kubenswrapper[4747]: I1126 13:43:04.811262 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:04 crc kubenswrapper[4747]: W1126 13:43:04.817994 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62670bf6_d017_4ff3_8403_62b642fe68e2.slice/crio-f261241978b1f83912e0e74a09ed8b046ac393a327b6e425fcec6fc35f316dd9 WatchSource:0}: Error finding container f261241978b1f83912e0e74a09ed8b046ac393a327b6e425fcec6fc35f316dd9: Status 404 returned error can't find the container with id f261241978b1f83912e0e74a09ed8b046ac393a327b6e425fcec6fc35f316dd9 Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.417761 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.624612 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerStarted","Data":"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.625031 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-httpd" containerID="cri-o://4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" gracePeriod=30 Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.624798 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-log" containerID="cri-o://21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" gracePeriod=30 Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.627370 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerStarted","Data":"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.627401 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerStarted","Data":"f261241978b1f83912e0e74a09ed8b046ac393a327b6e425fcec6fc35f316dd9"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.627411 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" 
event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerStarted","Data":"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.627438 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerStarted","Data":"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.627448 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerStarted","Data":"5085b7965cab44e8d6db6dd22f68a6e1ad4aaa0ce6c8433fc1d64e8d278f713c"} Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.651725 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.6517062620000003 podStartE2EDuration="2.651706262s" podCreationTimestamp="2025-11-26 13:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:05.65119173 +0000 UTC m=+1672.637502765" watchObservedRunningTime="2025-11-26 13:43:05.651706262 +0000 UTC m=+1672.638017277" Nov 26 13:43:05 crc kubenswrapper[4747]: I1126 13:43:05.676106 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.676082766 podStartE2EDuration="2.676082766s" podCreationTimestamp="2025-11-26 13:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:05.675615384 +0000 UTC m=+1672.661926409" watchObservedRunningTime="2025-11-26 13:43:05.676082766 +0000 UTC m=+1672.662393781" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.155190 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.225977 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226141 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp9nx\" (UniqueName: \"kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226171 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226220 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226279 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226302 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys" (OuterVolumeSpecName: "sys") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226306 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226350 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226347 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226375 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226445 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226515 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226556 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226569 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev" (OuterVolumeSpecName: "dev") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226574 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226592 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226629 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226663 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts\") pod \"62670bf6-d017-4ff3-8403-62b642fe68e2\" (UID: \"62670bf6-d017-4ff3-8403-62b642fe68e2\") " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226660 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs" (OuterVolumeSpecName: "logs") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226257 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run" (OuterVolumeSpecName: "run") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.226854 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227250 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227266 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227277 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227285 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227293 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62670bf6-d017-4ff3-8403-62b642fe68e2-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227301 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227309 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227317 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.227325 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62670bf6-d017-4ff3-8403-62b642fe68e2-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.246341 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance-cache") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.246467 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.246501 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts" (OuterVolumeSpecName: "scripts") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.246633 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx" (OuterVolumeSpecName: "kube-api-access-bp9nx") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "kube-api-access-bp9nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.263676 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data" (OuterVolumeSpecName: "config-data") pod "62670bf6-d017-4ff3-8403-62b642fe68e2" (UID: "62670bf6-d017-4ff3-8403-62b642fe68e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.328631 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp9nx\" (UniqueName: \"kubernetes.io/projected/62670bf6-d017-4ff3-8403-62b642fe68e2-kube-api-access-bp9nx\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.328681 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.328717 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.328728 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62670bf6-d017-4ff3-8403-62b642fe68e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.328742 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.342277 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.342532 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.430498 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.430542 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:06 crc kubenswrapper[4747]: W1126 13:43:06.483756 4747 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/session-c63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/session-c63.scope: no such file or directory Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 
13:43:06.638882 4747 generic.go:334] "Generic (PLEG): container finished" podID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerID="4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" exitCode=143 Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.639439 4747 generic.go:334] "Generic (PLEG): container finished" podID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerID="21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" exitCode=143 Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.638991 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.639017 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerDied","Data":"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164"} Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.639584 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerDied","Data":"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4"} Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.639604 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"62670bf6-d017-4ff3-8403-62b642fe68e2","Type":"ContainerDied","Data":"f261241978b1f83912e0e74a09ed8b046ac393a327b6e425fcec6fc35f316dd9"} Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.639628 4747 scope.go:117] "RemoveContainer" containerID="4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.670323 4747 scope.go:117] "RemoveContainer" containerID="21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.680796 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.687936 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.701887 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:06 crc kubenswrapper[4747]: E1126 13:43:06.702226 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-log" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.702245 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-log" Nov 26 13:43:06 crc kubenswrapper[4747]: E1126 13:43:06.702279 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-httpd" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.702285 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-httpd" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.702402 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-httpd" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.702414 4747 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" containerName="glance-log" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.703036 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.707615 4747 scope.go:117] "RemoveContainer" containerID="4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" Nov 26 13:43:06 crc kubenswrapper[4747]: E1126 13:43:06.708262 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164\": container with ID starting with 4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164 not found: ID does not exist" containerID="4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.708295 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164"} err="failed to get container status \"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164\": rpc error: code = NotFound desc = could not find container \"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164\": container with ID starting with 4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164 not found: ID does not exist" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.708320 4747 scope.go:117] "RemoveContainer" containerID="21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" Nov 26 13:43:06 crc kubenswrapper[4747]: E1126 13:43:06.708703 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4\": container with ID starting with 21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4 not found: ID does not exist" containerID="21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.708823 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4"} err="failed to get container status \"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4\": rpc error: code = NotFound desc = could not find container \"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4\": container with ID starting with 21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4 not found: ID does not exist" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.708910 4747 scope.go:117] "RemoveContainer" containerID="4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.709675 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164"} err="failed to get container status \"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164\": rpc error: code = NotFound desc = could not find container \"4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164\": container with ID starting with 4d771d3da9de4a9de3240b0ead6763d6c90a9e63dfdf1a08df7b4ac7617ef164 not found: ID does not exist" Nov 26 
13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.709704 4747 scope.go:117] "RemoveContainer" containerID="21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.710710 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4"} err="failed to get container status \"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4\": rpc error: code = NotFound desc = could not find container \"21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4\": container with ID starting with 21ba8f2c86c28d493bfe9bd3f6c2fd379e9e2ffc26c11b21cecb989f161ec3b4 not found: ID does not exist" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.729311 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.841929 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-sys\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.841964 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.841981 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-dev\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.841996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842016 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-httpd-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842034 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842061 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-scripts\") pod 
\"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842080 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842106 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842121 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-logs\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842137 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842161 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rzkh\" (UniqueName: \"kubernetes.io/projected/3f6318a7-4b12-49c6-9882-96c4d70893e8-kube-api-access-9rzkh\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842197 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-lib-modules\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.842213 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-config-data\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943337 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943372 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-sys\") pod \"glance-default-single-1\" (UID: 
\"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943389 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-dev\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943404 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943425 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-httpd-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943456 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-scripts\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943475 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943518 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-logs\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943534 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943544 4747 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943560 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rzkh\" (UniqueName: \"kubernetes.io/projected/3f6318a7-4b12-49c6-9882-96c4d70893e8-kube-api-access-9rzkh\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943596 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-lib-modules\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.943610 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-config-data\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944325 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-lib-modules\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944365 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944385 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944431 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-sys\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944436 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944499 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-var-locks-brick\") pod 
\"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944454 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944462 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f6318a7-4b12-49c6-9882-96c4d70893e8-dev\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944595 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-logs\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.944650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6318a7-4b12-49c6-9882-96c4d70893e8-httpd-run\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.949271 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-scripts\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.950303 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6318a7-4b12-49c6-9882-96c4d70893e8-config-data\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.966929 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:06 crc kubenswrapper[4747]: I1126 13:43:06.975251 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rzkh\" (UniqueName: \"kubernetes.io/projected/3f6318a7-4b12-49c6-9882-96c4d70893e8-kube-api-access-9rzkh\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.005186 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"3f6318a7-4b12-49c6-9882-96c4d70893e8\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.024343 4747 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.450158 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 26 13:43:07 crc kubenswrapper[4747]: W1126 13:43:07.458992 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f6318a7_4b12_49c6_9882_96c4d70893e8.slice/crio-e8322e3e25bf9110fc1ba60dd22bff5ed5d23c9cd7ca499ba5c811b893b39510 WatchSource:0}: Error finding container e8322e3e25bf9110fc1ba60dd22bff5ed5d23c9cd7ca499ba5c811b893b39510: Status 404 returned error can't find the container with id e8322e3e25bf9110fc1ba60dd22bff5ed5d23c9cd7ca499ba5c811b893b39510 Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.647643 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3f6318a7-4b12-49c6-9882-96c4d70893e8","Type":"ContainerStarted","Data":"39da3e92791061ea757c6f3a0f982d782d7545fcc0f877677f6e4dfcf9562fdc"} Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.647704 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3f6318a7-4b12-49c6-9882-96c4d70893e8","Type":"ContainerStarted","Data":"e8322e3e25bf9110fc1ba60dd22bff5ed5d23c9cd7ca499ba5c811b893b39510"} Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.800291 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:43:07 crc kubenswrapper[4747]: E1126 13:43:07.801311 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:43:07 crc kubenswrapper[4747]: I1126 13:43:07.811927 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62670bf6-d017-4ff3-8403-62b642fe68e2" path="/var/lib/kubelet/pods/62670bf6-d017-4ff3-8403-62b642fe68e2/volumes" Nov 26 13:43:08 crc kubenswrapper[4747]: I1126 13:43:08.667249 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3f6318a7-4b12-49c6-9882-96c4d70893e8","Type":"ContainerStarted","Data":"9eb07221042d84dd4aa5f66bdb6a3f250a348b04e6007de9101f60f65ae50a36"} Nov 26 13:43:08 crc kubenswrapper[4747]: I1126 13:43:08.692477 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.6924474529999998 podStartE2EDuration="2.692447453s" podCreationTimestamp="2025-11-26 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:08.691367676 +0000 UTC m=+1675.677678691" watchObservedRunningTime="2025-11-26 13:43:08.692447453 +0000 UTC m=+1675.678758468" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.284766 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.291127 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.331867 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.352968 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.748142 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:14 crc kubenswrapper[4747]: I1126 13:43:14.748518 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:16 crc kubenswrapper[4747]: I1126 13:43:16.707602 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:16 crc kubenswrapper[4747]: I1126 13:43:16.709496 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.025271 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.025332 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.054705 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.076204 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.774734 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:17 crc kubenswrapper[4747]: I1126 13:43:17.775150 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:18 crc kubenswrapper[4747]: I1126 13:43:18.156571 4747 scope.go:117] "RemoveContainer" containerID="6198fe079b5825c46974fc6030a48424241dd404cd43ceb4b71afa9e50815abf" Nov 26 13:43:18 crc kubenswrapper[4747]: I1126 13:43:18.206789 4747 scope.go:117] "RemoveContainer" containerID="8fd023de22b2ccf833b8216a3038a3a5d62b4fdd4d8aa0deb469e147374ca049" Nov 26 13:43:19 crc kubenswrapper[4747]: I1126 13:43:19.792344 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:43:19 crc kubenswrapper[4747]: I1126 13:43:19.792559 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:43:19 crc kubenswrapper[4747]: I1126 13:43:19.975457 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.050712 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.095640 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.095860 4747 
kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-log" containerID="cri-o://7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d" gracePeriod=30 Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.096229 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-httpd" containerID="cri-o://c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0" gracePeriod=30 Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.809184 4747 generic.go:334] "Generic (PLEG): container finished" podID="81885d61-1773-412a-823f-df25718c692a" containerID="7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d" exitCode=143 Nov 26 13:43:20 crc kubenswrapper[4747]: I1126 13:43:20.809306 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerDied","Data":"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d"} Nov 26 13:43:21 crc kubenswrapper[4747]: I1126 13:43:21.798795 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:43:21 crc kubenswrapper[4747]: E1126 13:43:21.799242 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.623389 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749558 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749791 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bswjr\" (UniqueName: \"kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749816 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749833 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749852 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749869 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749939 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.749967 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750008 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750025 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750067 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750084 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750111 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750138 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data\") pod \"81885d61-1773-412a-823f-df25718c692a\" (UID: \"81885d61-1773-412a-823f-df25718c692a\") " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750892 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev" (OuterVolumeSpecName: "dev") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750935 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.750991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751073 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys" (OuterVolumeSpecName: "sys") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751144 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751364 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751557 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751653 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run" (OuterVolumeSpecName: "run") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.751987 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs" (OuterVolumeSpecName: "logs") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.756362 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.756530 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.756695 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts" (OuterVolumeSpecName: "scripts") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.759262 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr" (OuterVolumeSpecName: "kube-api-access-bswjr") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "kube-api-access-bswjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.794247 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data" (OuterVolumeSpecName: "config-data") pod "81885d61-1773-412a-823f-df25718c692a" (UID: "81885d61-1773-412a-823f-df25718c692a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.835746 4747 generic.go:334] "Generic (PLEG): container finished" podID="81885d61-1773-412a-823f-df25718c692a" containerID="c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0" exitCode=0 Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.835783 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.835798 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerDied","Data":"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0"} Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.835833 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"81885d61-1773-412a-823f-df25718c692a","Type":"ContainerDied","Data":"5085b7965cab44e8d6db6dd22f68a6e1ad4aaa0ce6c8433fc1d64e8d278f713c"} Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.835853 4747 scope.go:117] "RemoveContainer" containerID="c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861185 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861223 4747 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861318 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861335 4747 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861348 4747 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861358 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861445 4747 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-sys\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc 
kubenswrapper[4747]: I1126 13:43:23.861461 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bswjr\" (UniqueName: \"kubernetes.io/projected/81885d61-1773-412a-823f-df25718c692a-kube-api-access-bswjr\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861473 4747 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861489 4747 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861500 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81885d61-1773-412a-823f-df25718c692a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861512 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81885d61-1773-412a-823f-df25718c692a-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861523 4747 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/81885d61-1773-412a-823f-df25718c692a-dev\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.861563 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.865001 4747 scope.go:117] "RemoveContainer" containerID="7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.867111 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.881779 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.886845 4747 scope.go:117] "RemoveContainer" containerID="c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0" Nov 26 13:43:23 crc kubenswrapper[4747]: E1126 13:43:23.888004 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0\": container with ID starting with c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0 not found: ID does not exist" containerID="c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.888105 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0"} err="failed to get container status \"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0\": rpc error: code = NotFound desc = could not find container \"c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0\": container with ID starting with c74608d8e066ef88e0581032908472e9bbec32b12f84c35a97979c82b0eb2ce0 not found: ID does not exist" Nov 26 
13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.888140 4747 scope.go:117] "RemoveContainer" containerID="7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d" Nov 26 13:43:23 crc kubenswrapper[4747]: E1126 13:43:23.890510 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d\": container with ID starting with 7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d not found: ID does not exist" containerID="7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.890585 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d"} err="failed to get container status \"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d\": rpc error: code = NotFound desc = could not find container \"7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d\": container with ID starting with 7fd4c63b1de9f66b81cd1aacbe715a151631ad2f58999d9fcb060881fa8c430d not found: ID does not exist" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.892464 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.893103 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.896795 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:23 crc kubenswrapper[4747]: E1126 13:43:23.899220 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-log" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.899254 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-log" Nov 26 13:43:23 crc kubenswrapper[4747]: E1126 13:43:23.899275 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-httpd" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.899282 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-httpd" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.899465 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-log" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.899488 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="81885d61-1773-412a-823f-df25718c692a" containerName="glance-httpd" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.900197 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.905104 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963657 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963706 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzthz\" (UniqueName: \"kubernetes.io/projected/464b8c3a-297c-4771-b17d-31e39d5bddae-kube-api-access-pzthz\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963734 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-logs\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-sys\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963801 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963848 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-config-data\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963875 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-lib-modules\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963918 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-scripts\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963942 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.963988 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964108 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-nvme\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964125 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-httpd-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964139 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-dev\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964152 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.964335 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.983310 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:23 crc kubenswrapper[4747]: I1126 13:43:23.983699 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066173 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-config-data\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066230 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-lib-modules\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066261 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-scripts\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066287 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-nvme\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066349 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-httpd-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066363 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-dev\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066381 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066378 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-lib-modules\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " 
pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066444 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzthz\" (UniqueName: \"kubernetes.io/projected/464b8c3a-297c-4771-b17d-31e39d5bddae-kube-api-access-pzthz\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066466 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-logs\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066482 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-sys\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066570 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-sys\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066607 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-dev\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066675 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066958 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-httpd-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.066999 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-run\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.067026 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/464b8c3a-297c-4771-b17d-31e39d5bddae-etc-nvme\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.067185 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464b8c3a-297c-4771-b17d-31e39d5bddae-logs\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.069471 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-scripts\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.071097 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464b8c3a-297c-4771-b17d-31e39d5bddae-config-data\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.083165 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzthz\" (UniqueName: \"kubernetes.io/projected/464b8c3a-297c-4771-b17d-31e39d5bddae-kube-api-access-pzthz\") pod \"glance-default-single-0\" (UID: \"464b8c3a-297c-4771-b17d-31e39d5bddae\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.216007 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.634781 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 26 13:43:24 crc kubenswrapper[4747]: W1126 13:43:24.638203 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464b8c3a_297c_4771_b17d_31e39d5bddae.slice/crio-e606d9d19119be8a8e3affba147235011a953051dbdbdf9e6843e9c62b5ed0c0 WatchSource:0}: Error finding container e606d9d19119be8a8e3affba147235011a953051dbdbdf9e6843e9c62b5ed0c0: Status 404 returned error can't find the container with id e606d9d19119be8a8e3affba147235011a953051dbdbdf9e6843e9c62b5ed0c0 Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.845753 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"464b8c3a-297c-4771-b17d-31e39d5bddae","Type":"ContainerStarted","Data":"4ec56f7142ee0e19fb97030c1ecef02581e7eff5dd5a420f7f11ab7c85a98d4d"} Nov 26 13:43:24 crc kubenswrapper[4747]: I1126 13:43:24.845811 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"464b8c3a-297c-4771-b17d-31e39d5bddae","Type":"ContainerStarted","Data":"e606d9d19119be8a8e3affba147235011a953051dbdbdf9e6843e9c62b5ed0c0"} Nov 26 13:43:25 crc kubenswrapper[4747]: I1126 13:43:25.807028 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81885d61-1773-412a-823f-df25718c692a" path="/var/lib/kubelet/pods/81885d61-1773-412a-823f-df25718c692a/volumes" Nov 26 13:43:25 crc kubenswrapper[4747]: I1126 13:43:25.869421 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"464b8c3a-297c-4771-b17d-31e39d5bddae","Type":"ContainerStarted","Data":"1b84ecc44cfd90a63cc74069524208b4a3f50dc0f02e3cdd59d62fe435affd65"} Nov 26 13:43:25 crc kubenswrapper[4747]: I1126 13:43:25.913678 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.913649451 podStartE2EDuration="2.913649451s" podCreationTimestamp="2025-11-26 13:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:25.904360931 +0000 UTC m=+1692.890672006" watchObservedRunningTime="2025-11-26 13:43:25.913649451 +0000 UTC m=+1692.899960476" Nov 26 13:43:32 crc kubenswrapper[4747]: I1126 13:43:32.798934 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:43:32 crc kubenswrapper[4747]: E1126 13:43:32.800113 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.217421 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.218459 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.267236 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.271201 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.949397 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:34 crc kubenswrapper[4747]: I1126 13:43:34.949705 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:36 crc kubenswrapper[4747]: I1126 13:43:36.889942 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:36 crc kubenswrapper[4747]: I1126 13:43:36.891514 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 26 13:43:43 crc kubenswrapper[4747]: I1126 13:43:43.803543 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:43:43 crc kubenswrapper[4747]: E1126 13:43:43.804269 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:43:57 crc kubenswrapper[4747]: I1126 13:43:57.798562 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:43:57 crc kubenswrapper[4747]: E1126 13:43:57.799454 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:44:09 crc kubenswrapper[4747]: I1126 13:44:09.798864 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:44:09 crc kubenswrapper[4747]: E1126 13:44:09.799854 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.734479 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cj8ql/must-gather-cthw6"] Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.736533 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.740402 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cj8ql"/"default-dockercfg-kzdtl" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.743746 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cj8ql"/"kube-root-ca.crt" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.743774 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cj8ql"/"openshift-service-ca.crt" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.748766 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cj8ql/must-gather-cthw6"] Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.892016 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.892313 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4852\" (UniqueName: \"kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.993507 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.993613 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4852\" (UniqueName: \"kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:10 crc kubenswrapper[4747]: I1126 13:44:10.994036 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:11 crc kubenswrapper[4747]: I1126 13:44:11.012774 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4852\" (UniqueName: \"kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852\") pod \"must-gather-cthw6\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") " pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:11 crc kubenswrapper[4747]: I1126 13:44:11.056949 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj8ql/must-gather-cthw6" Nov 26 13:44:11 crc kubenswrapper[4747]: I1126 13:44:11.487742 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cj8ql/must-gather-cthw6"] Nov 26 13:44:11 crc kubenswrapper[4747]: W1126 13:44:11.492403 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79bc3423_0e25_484f_b2a7_ce30a75962f4.slice/crio-2af72734d6c559e5fed6a3ca3a2460ff19cf889a36a3d1b171eb8021f4f01ec1 WatchSource:0}: Error finding container 2af72734d6c559e5fed6a3ca3a2460ff19cf889a36a3d1b171eb8021f4f01ec1: Status 404 returned error can't find the container with id 2af72734d6c559e5fed6a3ca3a2460ff19cf889a36a3d1b171eb8021f4f01ec1 Nov 26 13:44:12 crc kubenswrapper[4747]: I1126 13:44:12.295085 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj8ql/must-gather-cthw6" event={"ID":"79bc3423-0e25-484f-b2a7-ce30a75962f4","Type":"ContainerStarted","Data":"2af72734d6c559e5fed6a3ca3a2460ff19cf889a36a3d1b171eb8021f4f01ec1"} Nov 26 13:44:16 crc kubenswrapper[4747]: I1126 13:44:16.331655 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj8ql/must-gather-cthw6" event={"ID":"79bc3423-0e25-484f-b2a7-ce30a75962f4","Type":"ContainerStarted","Data":"cd8025e63c3b3751fd7adbd79835fef6c4137bb90d85a2b0fa4bcded913e2f49"} Nov 26 13:44:16 crc kubenswrapper[4747]: I1126 13:44:16.332331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj8ql/must-gather-cthw6" event={"ID":"79bc3423-0e25-484f-b2a7-ce30a75962f4","Type":"ContainerStarted","Data":"dac11c30d4de2bb818eba9d04b65c47d24723c752ea22f8781df863315dca97a"} Nov 26 13:44:16 crc kubenswrapper[4747]: I1126 13:44:16.350107 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cj8ql/must-gather-cthw6" podStartSLOduration=2.311135278 podStartE2EDuration="6.350086866s" podCreationTimestamp="2025-11-26 13:44:10 +0000 UTC" firstStartedPulling="2025-11-26 13:44:11.497095547 +0000 UTC m=+1738.483406562" lastFinishedPulling="2025-11-26 13:44:15.536047135 +0000 UTC m=+1742.522358150" observedRunningTime="2025-11-26 13:44:16.345783558 +0000 UTC m=+1743.332094573" watchObservedRunningTime="2025-11-26 13:44:16.350086866 +0000 UTC m=+1743.336397881" Nov 26 13:44:18 crc kubenswrapper[4747]: I1126 13:44:18.365312 4747 scope.go:117] "RemoveContainer" containerID="b2c13feae87847099a250004ba8d776e0127bfb5a2a95783b077b348ce09a35f" Nov 26 13:44:18 crc kubenswrapper[4747]: I1126 13:44:18.414686 4747 scope.go:117] "RemoveContainer" containerID="13c86963611eeb04c6360f4cbd785ace25f6213706702da76d6f1eebcaa94a1e" Nov 26 13:44:18 crc kubenswrapper[4747]: I1126 13:44:18.431834 4747 scope.go:117] "RemoveContainer" containerID="cc44cfb85cf287b24227e8d8c63ae7c3e15268ef183bbe3e96bbbcf8e1e8230e" Nov 26 13:44:18 crc kubenswrapper[4747]: I1126 13:44:18.498629 4747 scope.go:117] "RemoveContainer" containerID="09921062aed88cd083ad60529b58e93ebbe48c3431d50796d9fcb21766054739" Nov 26 13:44:18 crc kubenswrapper[4747]: I1126 13:44:18.519442 4747 scope.go:117] "RemoveContainer" containerID="1f5229f57606a42c709ed24c531331d56023e0f638ec6de43dcf6e13cfe7efb3" Nov 26 13:44:22 crc kubenswrapper[4747]: I1126 13:44:22.797885 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:44:22 crc kubenswrapper[4747]: E1126 13:44:22.798632 4747 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:44:36 crc kubenswrapper[4747]: I1126 13:44:36.797684 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:44:36 crc kubenswrapper[4747]: E1126 13:44:36.798500 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:44:48 crc kubenswrapper[4747]: I1126 13:44:48.863344 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/util/0.log" Nov 26 13:44:48 crc kubenswrapper[4747]: I1126 13:44:48.985567 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/util/0.log" Nov 26 13:44:48 crc kubenswrapper[4747]: I1126 13:44:48.987965 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.028627 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.177075 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.177532 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/util/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.178728 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0c9e306bbb596fa696774558978b0ca0a789408cf8be49b60a348947bckkvgv_55715f5c-5a0d-4d68-9a7f-c4918d4fe9d1/extract/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.329450 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/util/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.481817 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/util/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.482783 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.499695 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.686400 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/pull/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.688185 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/util/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.709982 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fmptkr_751ef444-9117-4506-bf1e-9d7605d07991/extract/0.log" Nov 26 13:44:49 crc kubenswrapper[4747]: I1126 13:44:49.843142 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/util/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.089890 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/util/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.126702 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.133196 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.255846 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/util/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.298929 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/extract/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.310937 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dlx7wf_da964da5-781d-46be-a7f8-3f0151d77c22/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.432605 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/util/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.602596 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/util/0.log" Nov 26 13:44:50 
crc kubenswrapper[4747]: I1126 13:44:50.640010 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.650496 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.797561 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/util/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.842164 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/pull/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.851494 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bvwzbh_94358332-1a8d-4750-8383-8c05da245874/extract/0.log" Nov 26 13:44:50 crc kubenswrapper[4747]: I1126 13:44:50.962687 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.096998 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.117403 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.156385 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.288668 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.304163 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/extract/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.308410 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590xsjgt_11e289a8-87c2-4f73-920c-da94be9d0642/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.453616 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.632682 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.648840 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.726941 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.798999 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:44:51 crc kubenswrapper[4747]: E1126 13:44:51.799277 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.822366 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/util/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.834194 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/extract/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.879616 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368cvbnp5_f7ce603f-cc97-45cd-afb6-c8397b5688e6/pull/0.log" Nov 26 13:44:51 crc kubenswrapper[4747]: I1126 13:44:51.974336 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/util/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.057087 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-npbxv"] Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.065417 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-4e67-account-create-update-gzgls"] Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.070827 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-4e67-account-create-update-gzgls"] Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.076564 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-npbxv"] Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.185793 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/pull/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.203763 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/util/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.216282 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/pull/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.375594 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/util/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.378784 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/pull/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.431930 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3sg5vc_643ba26b-247f-4601-a33e-60a171de4a37/extract/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.578095 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-index-5zwq9_c34b19b7-bcf1-4d32-8f5f-e5596cac7b63/registry-server/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.636842 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5c788c94db-hqzsn_68495c62-1417-4aaf-8a34-08ba62255819/manager/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.683190 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-56c4598b9-2tw2c_f9344af4-6999-4694-8a62-3541837564f0/manager/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.789005 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-8zsmn_1f50f300-6e38-4c1e-9679-76ee15a7dcd1/registry-server/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.841326 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-b4655cf54-jl25v_8a1ede6d-b2cb-4d1f-857a-4307ba531806/kube-rbac-proxy/0.log" Nov 26 13:44:52 crc kubenswrapper[4747]: I1126 13:44:52.944825 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-b4655cf54-jl25v_8a1ede6d-b2cb-4d1f-857a-4307ba531806/manager/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.001724 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-s2drq_ee9a1424-ac63-4784-bed5-19b03117eb93/registry-server/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.145333 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-9bbbdb54c-nzgl2_f06c40e2-33fe-4c14-9f47-5b7ea72c2582/manager/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.189647 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-zzjrs_5dddeb24-eccf-47cc-a06d-302ac1cf1c1e/registry-server/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.249001 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-d6b588f4-b64k9_399f9a79-e1f0-4fcd-8378-f0dee4361f7c/manager/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.350190 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-gnpfp_ce8a44e5-90cc-4c67-a1f5-fa9b751036de/registry-server/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.426268 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-lgbt4_af5d51b6-54ab-4dd7-a34f-74398364c04c/operator/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.513532 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-nw29v_0cd030c4-2860-4ff1-a643-17a38f5cb419/registry-server/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.584017 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7fd68b5878-x5z6g_55113f3f-9d43-4598-8b6b-34ec136691d8/manager/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.650649 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-s8hf9_45a71a83-66e4-445d-8ec2-b2aaee755942/registry-server/0.log" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.806500 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add568ec-ad1a-489b-a5b3-802f0cc37a8a" path="/var/lib/kubelet/pods/add568ec-ad1a-489b-a5b3-802f0cc37a8a/volumes" Nov 26 13:44:53 crc kubenswrapper[4747]: I1126 13:44:53.807462 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78c2b07-9725-4820-81af-7f684201bece" path="/var/lib/kubelet/pods/d78c2b07-9725-4820-81af-7f684201bece/volumes" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.142703 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz"] Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.143649 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.149524 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.149637 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.152624 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz"] Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.262975 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.263253 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.263317 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjbg\" (UniqueName: \"kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.364549 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.365222 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.365386 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kjbg\" (UniqueName: \"kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.365518 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume\") pod 
\"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.382237 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.386046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kjbg\" (UniqueName: \"kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg\") pod \"collect-profiles-29402745-hxvkz\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.468555 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:00 crc kubenswrapper[4747]: I1126 13:45:00.959063 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz"] Nov 26 13:45:01 crc kubenswrapper[4747]: I1126 13:45:01.688684 4747 generic.go:334] "Generic (PLEG): container finished" podID="2ef62b83-12e0-4a37-8cea-73ec98f7250b" containerID="14ae43039ba7836dd6564b9af057838d3bab3324ad8b2aa72cb0cf9627e78991" exitCode=0 Nov 26 13:45:01 crc kubenswrapper[4747]: I1126 13:45:01.689093 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" event={"ID":"2ef62b83-12e0-4a37-8cea-73ec98f7250b","Type":"ContainerDied","Data":"14ae43039ba7836dd6564b9af057838d3bab3324ad8b2aa72cb0cf9627e78991"} Nov 26 13:45:01 crc kubenswrapper[4747]: I1126 13:45:01.689122 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" event={"ID":"2ef62b83-12e0-4a37-8cea-73ec98f7250b","Type":"ContainerStarted","Data":"0ec6898df9fdc2332ca9281b7baf6c4eb312d5d4f4047f9d05edca26215e121a"} Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.001389 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.101789 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume\") pod \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.101856 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kjbg\" (UniqueName: \"kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg\") pod \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.101902 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume\") pod \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\" (UID: \"2ef62b83-12e0-4a37-8cea-73ec98f7250b\") " Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.102581 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ef62b83-12e0-4a37-8cea-73ec98f7250b" (UID: "2ef62b83-12e0-4a37-8cea-73ec98f7250b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.107205 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg" (OuterVolumeSpecName: "kube-api-access-9kjbg") pod "2ef62b83-12e0-4a37-8cea-73ec98f7250b" (UID: "2ef62b83-12e0-4a37-8cea-73ec98f7250b"). InnerVolumeSpecName "kube-api-access-9kjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.107855 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ef62b83-12e0-4a37-8cea-73ec98f7250b" (UID: "2ef62b83-12e0-4a37-8cea-73ec98f7250b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.203568 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef62b83-12e0-4a37-8cea-73ec98f7250b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.203609 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef62b83-12e0-4a37-8cea-73ec98f7250b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.203623 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kjbg\" (UniqueName: \"kubernetes.io/projected/2ef62b83-12e0-4a37-8cea-73ec98f7250b-kube-api-access-9kjbg\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.707803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" event={"ID":"2ef62b83-12e0-4a37-8cea-73ec98f7250b","Type":"ContainerDied","Data":"0ec6898df9fdc2332ca9281b7baf6c4eb312d5d4f4047f9d05edca26215e121a"} Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.707868 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec6898df9fdc2332ca9281b7baf6c4eb312d5d4f4047f9d05edca26215e121a" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.707881 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-hxvkz" Nov 26 13:45:03 crc kubenswrapper[4747]: I1126 13:45:03.801920 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:45:03 crc kubenswrapper[4747]: E1126 13:45:03.802264 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" Nov 26 13:45:07 crc kubenswrapper[4747]: I1126 13:45:07.476355 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zw88v_f637a777-d9a6-44b1-a6fd-de227846cf5b/control-plane-machine-set-operator/0.log" Nov 26 13:45:07 crc kubenswrapper[4747]: I1126 13:45:07.596454 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pdp9j_480f682a-63f7-4ef6-b10c-29c34222269b/kube-rbac-proxy/0.log" Nov 26 13:45:07 crc kubenswrapper[4747]: I1126 13:45:07.671048 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pdp9j_480f682a-63f7-4ef6-b10c-29c34222269b/machine-api-operator/0.log" Nov 26 13:45:16 crc kubenswrapper[4747]: I1126 13:45:16.798811 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80" Nov 26 13:45:16 crc kubenswrapper[4747]: E1126 13:45:16.799715 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.631845 4747 scope.go:117] "RemoveContainer" containerID="e9956c8ebff7b1e13b5e6d147613c3662d655767d0b3349cdc4c08bd9e9578c6"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.660647 4747 scope.go:117] "RemoveContainer" containerID="6fb3a0c0fe75857bc683dbfd91bf7997915d18b0c1fd72387be2f7dbf4fb37d3"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.688795 4747 scope.go:117] "RemoveContainer" containerID="822648aff5df9a047f56824773596120f2f81c70a9cc286954018bfad021d673"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.728933 4747 scope.go:117] "RemoveContainer" containerID="2900c5a89dc941d8beafed5154b9ebeec5c0a662189c2b0cb6f25cd3f8e0a21f"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.762571 4747 scope.go:117] "RemoveContainer" containerID="0ae81ae763777e57cbaf96c6e8e88be24e9866759631d8db3cebacbb49e5bfe6"
Nov 26 13:45:18 crc kubenswrapper[4747]: I1126 13:45:18.788121 4747 scope.go:117] "RemoveContainer" containerID="e706477bb894df737ffd1dd9b6bfd6d51e63ad14d544d1df8d54207434b52f93"
Nov 26 13:45:22 crc kubenswrapper[4747]: I1126 13:45:22.594012 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-g98d5_e0573c54-5abb-4857-8b6d-dcdcc168f1a0/kube-rbac-proxy/0.log"
Nov 26 13:45:22 crc kubenswrapper[4747]: I1126 13:45:22.595072 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-g98d5_e0573c54-5abb-4857-8b6d-dcdcc168f1a0/controller/0.log"
Nov 26 13:45:22 crc kubenswrapper[4747]: I1126 13:45:22.753923 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-frr-files/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.041310 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-frr-files/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.076343 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-reloader/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.080551 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-reloader/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.083383 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-metrics/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.287712 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-frr-files/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.330291 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-metrics/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.355406 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-reloader/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.362319 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-metrics/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.538604 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-metrics/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.539845 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-frr-files/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.544900 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/cp-reloader/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.568853 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/controller/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.717540 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/kube-rbac-proxy/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.717909 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/frr-metrics/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.789307 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/kube-rbac-proxy-frr/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.964826 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/reloader/0.log"
Nov 26 13:45:23 crc kubenswrapper[4747]: I1126 13:45:23.969714 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-k77x5_45f60cce-70b3-4e45-98a4-c66edcca9e65/frr-k8s-webhook-server/0.log"
Nov 26 13:45:24 crc kubenswrapper[4747]: I1126 13:45:24.184969 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6db6754d4-6csp2_78093636-f156-4295-918f-8aa7278c3f69/manager/0.log"
Nov 26 13:45:24 crc kubenswrapper[4747]: I1126 13:45:24.355792 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87pwf_a376e01b-77a9-4bcc-af7c-34a3994a5b20/frr/0.log"
Nov 26 13:45:24 crc kubenswrapper[4747]: I1126 13:45:24.392545 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-768865bcf6-7wrrc_1a251542-86df-4644-acf8-6dd3d58697ad/webhook-server/0.log"
Nov 26 13:45:24 crc kubenswrapper[4747]: I1126 13:45:24.450256 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pds5h_84ac964b-de2c-43b9-ae3c-c7ea157287dd/kube-rbac-proxy/0.log"
Nov 26 13:45:24 crc kubenswrapper[4747]: I1126 13:45:24.689213 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pds5h_84ac964b-de2c-43b9-ae3c-c7ea157287dd/speaker/0.log"
Nov 26 13:45:27 crc kubenswrapper[4747]: I1126 13:45:27.798631 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"
Nov 26 13:45:27 crc kubenswrapper[4747]: E1126 13:45:27.798978 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.136526 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-create-5dxfx_b7da5be4-98ba-46b8-a15b-db43a4b12968/mariadb-database-create/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.239302 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-sync-79x4z_d98ec4ca-6da2-4622-a71f-64476e8ff68c/glance-db-sync/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.297071 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-single-0_464b8c3a-297c-4771-b17d-31e39d5bddae/glance-httpd/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.399279 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-single-0_464b8c3a-297c-4771-b17d-31e39d5bddae/glance-log/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.496478 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-single-1_3f6318a7-4b12-49c6-9882-96c4d70893e8/glance-httpd/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.549518 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-single-1_3f6318a7-4b12-49c6-9882-96c4d70893e8/glance-log/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.640546 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-e5bb-account-create-update-bxznw_e5c06574-7633-4b77-aea8-c99b389ef6e0/mariadb-account-create-update/0.log"
Nov 26 13:45:37 crc kubenswrapper[4747]: I1126 13:45:37.881538 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-bootstrap-79j8m_2d731e42-04bc-46f7-b946-fa91ee2b54e2/keystone-bootstrap/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.042437 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-db-sync-62sk4_81d595c7-cabd-4c96-a992-a417f16b449b/keystone-db-sync/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.044117 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-6587955ddb-c2fhn_9e011df0-47d0-4e00-ad3a-044212d955b1/keystone-api/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.243110 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_09efed01-5e87-440c-aafe-bc16617e8bfd/mysql-bootstrap/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.462076 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_09efed01-5e87-440c-aafe-bc16617e8bfd/mysql-bootstrap/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.496009 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_09efed01-5e87-440c-aafe-bc16617e8bfd/galera/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.639378 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_c052d6ba-7cf0-4d98-8c38-be1d7afccafa/mysql-bootstrap/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.897632 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_c052d6ba-7cf0-4d98-8c38-be1d7afccafa/galera/0.log"
Nov 26 13:45:38 crc kubenswrapper[4747]: I1126 13:45:38.950124 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_c052d6ba-7cf0-4d98-8c38-be1d7afccafa/mysql-bootstrap/0.log"
Nov 26 13:45:39 crc kubenswrapper[4747]: I1126 13:45:39.144288 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_298454c7-93bf-41be-877e-9f3e27f47119/mysql-bootstrap/0.log"
Nov 26 13:45:39 crc kubenswrapper[4747]: I1126 13:45:39.396597 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_298454c7-93bf-41be-877e-9f3e27f47119/mysql-bootstrap/0.log"
Nov 26 13:45:39 crc kubenswrapper[4747]: I1126 13:45:39.454011 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_298454c7-93bf-41be-877e-9f3e27f47119/galera/0.log"
Nov 26 13:45:39 crc kubenswrapper[4747]: I1126 13:45:39.725406 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_b12599e7-b7ee-490d-b697-e1c48161d7a6/openstackclient/0.log"
Nov 26 13:45:39 crc kubenswrapper[4747]: I1126 13:45:39.863195 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21/setup-container/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.043884 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21/rabbitmq/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.048928 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_memcached-0_fa5963aa-3702-4432-a430-e6716768ed8c/memcached/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.056777 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_e8babba1-d2a3-4b9c-9bfe-c1a4b20c6a21/setup-container/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.233752 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-6bd58cfcf7-flzjr_e67117b2-ffe7-4796-8b7b-6f0065a87846/proxy-server/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.247426 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-6bd58cfcf7-flzjr_e67117b2-ffe7-4796-8b7b-6f0065a87846/proxy-httpd/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.346420 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-ring-rebalance-pc4vw_52f178ee-ada4-42e2-85c0-a4ebf5322d5d/swift-ring-rebalance/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.475992 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/account-reaper/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.476026 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/account-auditor/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.541723 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/account-replicator/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.653891 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/container-auditor/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.658744 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/account-server/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.681905 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/container-replicator/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.743095 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/container-server/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.798665 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"
Nov 26 13:45:40 crc kubenswrapper[4747]: E1126 13:45:40.798951 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.847331 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/object-auditor/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.847868 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/container-updater/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.889684 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/object-replicator/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.897591 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/object-expirer/0.log"
Nov 26 13:45:40 crc kubenswrapper[4747]: I1126 13:45:40.936097 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/object-server/0.log"
Nov 26 13:45:41 crc kubenswrapper[4747]: I1126 13:45:41.028971 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/rsync/0.log"
Nov 26 13:45:41 crc kubenswrapper[4747]: I1126 13:45:41.029262 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/object-updater/0.log"
Nov 26 13:45:41 crc kubenswrapper[4747]: I1126 13:45:41.061444 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_0c003abf-6288-4d54-8c91-07c1eebe0123/swift-recon-cron/0.log"
Nov 26 13:45:51 crc kubenswrapper[4747]: I1126 13:45:51.030290 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-62sk4"]
Nov 26 13:45:51 crc kubenswrapper[4747]: I1126 13:45:51.035798 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-62sk4"]
Nov 26 13:45:51 crc kubenswrapper[4747]: I1126 13:45:51.808288 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d595c7-cabd-4c96-a992-a417f16b449b" path="/var/lib/kubelet/pods/81d595c7-cabd-4c96-a992-a417f16b449b/volumes"
Nov 26 13:45:52 crc kubenswrapper[4747]: I1126 13:45:52.957628 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-utilities/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.148185 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-content/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.173868 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-content/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.180429 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-utilities/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.323989 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-utilities/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.369653 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/extract-content/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.610395 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-utilities/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.735391 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-utilities/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.766963 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6l85_e75bd453-1704-4c67-aea4-e184f0a8b320/registry-server/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.790436 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-content/0.log"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.807777 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"
Nov 26 13:45:53 crc kubenswrapper[4747]: E1126 13:45:53.808032 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hjc55_openshift-machine-config-operator(b021e3b3-27be-4500-8dae-e5cd31ba8405)\"" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405"
Nov 26 13:45:53 crc kubenswrapper[4747]: I1126 13:45:53.886347 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-content/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.042996 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-utilities/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.069217 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/extract-content/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.307585 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/util/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.404055 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-96vsv_d138c6d5-a89a-4ae6-9c33-dd7ac74ee466/registry-server/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.449148 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/pull/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.482928 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/util/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.530204 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/pull/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.687538 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/util/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.727801 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/pull/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.764798 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67d4bb_6e988ef3-c6be-4380-853f-95039903e425/extract/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.918480 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mqnn7_20bd504f-0b9c-407b-968c-a2ef32da0158/marketplace-operator/0.log"
Nov 26 13:45:54 crc kubenswrapper[4747]: I1126 13:45:54.925176 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-utilities/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.100022 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.134504 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-utilities/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.169257 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.279948 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.325190 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/extract-utilities/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.372085 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjwfc_01124784-f0ee-42b1-82d1-18b591ee2e88/registry-server/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.444039 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-utilities/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.653620 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-utilities/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.671286 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.684869 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.818463 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-content/0.log"
Nov 26 13:45:55 crc kubenswrapper[4747]: I1126 13:45:55.835675 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/extract-utilities/0.log"
Nov 26 13:45:56 crc kubenswrapper[4747]: I1126 13:45:56.089951 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ppv6b_542f8b1f-6f29-4d0c-83b4-9aadfba039ff/registry-server/0.log"
Nov 26 13:45:58 crc kubenswrapper[4747]: I1126 13:45:58.026203 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-79j8m"]
Nov 26 13:45:58 crc kubenswrapper[4747]: I1126 13:45:58.033723 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-79j8m"]
Nov 26 13:45:59 crc kubenswrapper[4747]: I1126 13:45:59.808226 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d731e42-04bc-46f7-b946-fa91ee2b54e2" path="/var/lib/kubelet/pods/2d731e42-04bc-46f7-b946-fa91ee2b54e2/volumes"
Nov 26 13:46:05 crc kubenswrapper[4747]: I1126 13:46:05.798465 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"
Nov 26 13:46:06 crc kubenswrapper[4747]: I1126 13:46:06.180844 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"77e0008e16ee441be6da19a07c5f5dac233f9b96f56a74c4e809bff86953013e"}
Nov 26 13:46:18 crc kubenswrapper[4747]: I1126 13:46:18.926361 4747 scope.go:117] "RemoveContainer" containerID="3a95b1f24bd01352a4b4050d2b436c62a4d82a2178db7b9ad9bb709afb7238f8"
Nov 26 13:46:18 crc kubenswrapper[4747]: I1126 13:46:18.973700 4747 scope.go:117] "RemoveContainer" containerID="551ce22e18f0dd11863a0a41ff624039db36caa2bea8bd0aec4e28124c556542"
Nov 26 13:46:18 crc kubenswrapper[4747]: I1126 13:46:18.989657 4747 scope.go:117] "RemoveContainer" containerID="9efb264505d248ab1f95e981c3cdd12853ff1c17741c062bee78261d473864f7"
Nov 26 13:46:19 crc kubenswrapper[4747]: I1126 13:46:19.050759 4747 scope.go:117] "RemoveContainer" containerID="e9ab3bfabd07fcdf073154d0daab727a7bd226a9cfb6d4357df97ff823cb8de8"
Nov 26 13:46:19 crc kubenswrapper[4747]: I1126 13:46:19.083586 4747 scope.go:117] "RemoveContainer" containerID="503a5fb78966ae653a1187f7a06fe171c8bb92e92bef16bf88405bd387d3c8e3"
Nov 26 13:46:19 crc kubenswrapper[4747]: I1126 13:46:19.106088 4747 scope.go:117] "RemoveContainer" containerID="1c057fe2f7e648b74dacd4c71a4bb64afcbaffb9fc2fa8d1ebb317ae780a138a"
Nov 26 13:47:02 crc kubenswrapper[4747]: I1126 13:47:02.616766 4747 generic.go:334] "Generic (PLEG): container finished" podID="79bc3423-0e25-484f-b2a7-ce30a75962f4" containerID="dac11c30d4de2bb818eba9d04b65c47d24723c752ea22f8781df863315dca97a" exitCode=0
Nov 26 13:47:02 crc kubenswrapper[4747]: I1126 13:47:02.616844 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj8ql/must-gather-cthw6" event={"ID":"79bc3423-0e25-484f-b2a7-ce30a75962f4","Type":"ContainerDied","Data":"dac11c30d4de2bb818eba9d04b65c47d24723c752ea22f8781df863315dca97a"}
Nov 26 13:47:02 crc kubenswrapper[4747]: I1126 13:47:02.617781 4747 scope.go:117] "RemoveContainer" containerID="dac11c30d4de2bb818eba9d04b65c47d24723c752ea22f8781df863315dca97a"
Nov 26 13:47:03 crc kubenswrapper[4747]: I1126 13:47:03.473974 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj8ql_must-gather-cthw6_79bc3423-0e25-484f-b2a7-ce30a75962f4/gather/0.log"
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.402150 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cj8ql/must-gather-cthw6"]
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.403012 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cj8ql/must-gather-cthw6" podUID="79bc3423-0e25-484f-b2a7-ce30a75962f4" containerName="copy" containerID="cri-o://cd8025e63c3b3751fd7adbd79835fef6c4137bb90d85a2b0fa4bcded913e2f49" gracePeriod=2
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.409189 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cj8ql/must-gather-cthw6"]
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.693312 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj8ql_must-gather-cthw6_79bc3423-0e25-484f-b2a7-ce30a75962f4/copy/0.log"
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.693864 4747 generic.go:334] "Generic (PLEG): container finished" podID="79bc3423-0e25-484f-b2a7-ce30a75962f4" containerID="cd8025e63c3b3751fd7adbd79835fef6c4137bb90d85a2b0fa4bcded913e2f49" exitCode=143
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.771592 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj8ql_must-gather-cthw6_79bc3423-0e25-484f-b2a7-ce30a75962f4/copy/0.log"
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.771881 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj8ql/must-gather-cthw6"
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.961114 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output\") pod \"79bc3423-0e25-484f-b2a7-ce30a75962f4\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") "
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.972775 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4852\" (UniqueName: \"kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852\") pod \"79bc3423-0e25-484f-b2a7-ce30a75962f4\" (UID: \"79bc3423-0e25-484f-b2a7-ce30a75962f4\") "
Nov 26 13:47:10 crc kubenswrapper[4747]: I1126 13:47:10.997258 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852" (OuterVolumeSpecName: "kube-api-access-p4852") pod "79bc3423-0e25-484f-b2a7-ce30a75962f4" (UID: "79bc3423-0e25-484f-b2a7-ce30a75962f4"). InnerVolumeSpecName "kube-api-access-p4852". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.062395 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "79bc3423-0e25-484f-b2a7-ce30a75962f4" (UID: "79bc3423-0e25-484f-b2a7-ce30a75962f4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.080544 4747 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79bc3423-0e25-484f-b2a7-ce30a75962f4-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.080588 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4852\" (UniqueName: \"kubernetes.io/projected/79bc3423-0e25-484f-b2a7-ce30a75962f4-kube-api-access-p4852\") on node \"crc\" DevicePath \"\""
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.701910 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj8ql_must-gather-cthw6_79bc3423-0e25-484f-b2a7-ce30a75962f4/copy/0.log"
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.703969 4747 scope.go:117] "RemoveContainer" containerID="cd8025e63c3b3751fd7adbd79835fef6c4137bb90d85a2b0fa4bcded913e2f49"
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.704022 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj8ql/must-gather-cthw6"
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.723558 4747 scope.go:117] "RemoveContainer" containerID="dac11c30d4de2bb818eba9d04b65c47d24723c752ea22f8781df863315dca97a"
Nov 26 13:47:11 crc kubenswrapper[4747]: I1126 13:47:11.806842 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79bc3423-0e25-484f-b2a7-ce30a75962f4" path="/var/lib/kubelet/pods/79bc3423-0e25-484f-b2a7-ce30a75962f4/volumes"
Nov 26 13:48:19 crc kubenswrapper[4747]: I1126 13:48:19.245009 4747 scope.go:117] "RemoveContainer" containerID="b97de9da232f29d353e3c39c14c313b58587c293fb34e2f94b7ae5dc05ea9e07"
Nov 26 13:48:19 crc kubenswrapper[4747]: I1126 13:48:19.271589 4747 scope.go:117] "RemoveContainer" containerID="01732291b9e401883f26f8f02a1ea169a1a5ad478ec2596bd7f585030f406e2a"
Nov 26 13:48:19 crc kubenswrapper[4747]: I1126 13:48:19.298742 4747 scope.go:117] "RemoveContainer" containerID="517f5654426e06093bd2d02d69c46f756ccd8f31ceb2b6165c867e1cf30023b0"
Nov 26 13:48:19 crc kubenswrapper[4747]: I1126 13:48:19.346629 4747 scope.go:117] "RemoveContainer" containerID="98601232955ea35f5e0f036578ba73b478105385469aea271d3951341a896da2"
Nov 26 13:48:33 crc kubenswrapper[4747]: I1126 13:48:33.417445 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:48:33 crc kubenswrapper[4747]: I1126 13:48:33.418285 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:49:03 crc kubenswrapper[4747]: I1126 13:49:03.417605 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:49:03 crc kubenswrapper[4747]: I1126 13:49:03.419342 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:49:19 crc kubenswrapper[4747]: I1126 13:49:19.410176 4747 scope.go:117] "RemoveContainer" containerID="0db4076a5fbc6ad1d70edb83ac0f8214c0cea71d954c0e02a738fd1d47ed3ee8"
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.417575 4747 patch_prober.go:28] interesting pod/machine-config-daemon-hjc55 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.418174 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.418227 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjc55"
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.418907 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77e0008e16ee441be6da19a07c5f5dac233f9b96f56a74c4e809bff86953013e"} pod="openshift-machine-config-operator/machine-config-daemon-hjc55" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.418965 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" podUID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerName="machine-config-daemon" containerID="cri-o://77e0008e16ee441be6da19a07c5f5dac233f9b96f56a74c4e809bff86953013e" gracePeriod=600
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.857215 4747 generic.go:334] "Generic (PLEG): container finished" podID="b021e3b3-27be-4500-8dae-e5cd31ba8405" containerID="77e0008e16ee441be6da19a07c5f5dac233f9b96f56a74c4e809bff86953013e" exitCode=0
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.857470 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerDied","Data":"77e0008e16ee441be6da19a07c5f5dac233f9b96f56a74c4e809bff86953013e"}
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.857496 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjc55" event={"ID":"b021e3b3-27be-4500-8dae-e5cd31ba8405","Type":"ContainerStarted","Data":"e1c990f5ca93e75c0f1cf12ce9b0480a7d2eaede4b057781c560ea2274b375a8"}
Nov 26 13:49:33 crc kubenswrapper[4747]: I1126 13:49:33.857511 4747 scope.go:117] "RemoveContainer" containerID="f0deeec456617c5de6a6c1084dbe57220605702b5e80a3db1a27ac09415ccc80"